Test Report: none_Linux 19651

f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236

Tests failed (26/167)

TestDownloadOnly/v1.20.0/json-events (1.45s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: exit status 40 (1.45405673s)

-- stdout --
	{"specversion":"1.0","id":"08c4d851-a1cc-4e47-8cab-f50dec094de5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6501c43-f5bc-45f1-a119-6e356a388c3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"ae628f70-683b-43b5-bc8a-92de02a4ab55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c16d8378-6428-45a5-b94c-d790b47393e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig"}}
	{"specversion":"1.0","id":"ca7a3e7c-0d20-4215-a063-202431decbd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube"}}
	{"specversion":"1.0","id":"581fc53e-f51a-429a-8d53-9a851fe1b349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8e720c85-85cb-4a0b-8809-22c8ea3705d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"df186d8a-72b8-4260-9d69-ff3c7c3f77ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the none driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"69ca10ac-c62f-4dfd-99b5-193f3265ef2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"minikube\" primary control-plane node in \"minikube\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c8ca8e6-6cd4-47d4-a181-eeecae63b7d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache binaries: caching binary kubelet: download failed: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200] Decompressors:map[bz2:0xc000600f20 gz:0xc000600f28 tar:0xc000600ed0 tar.bz2:0xc000600ee0 tar.gz:0xc000600ef0 tar.xz:0xc000600f00 tar.zst:0xc000600f10 tbz2:0xc000600ee0 tgz:0xc0006
00ef0 txz:0xc000600f00 tzst:0xc000600f10 xz:0xc000600f30 zip:0xc000600f40 zst:0xc000600f38] Getters:map[file:0xc00188c0c0 http:0xc001888050 https:0xc0018880a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: stream error: stream ID 1; PROTOCOL_ERROR; received from peer","name":"INET_CACHE_BINARIES","url":""}}
	{"specversion":"1.0","id":"6775b2a2-af64-43e7-bc5f-463ef08b76bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0916 10:22:28.124062   11069 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:28.124316   11069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:28.124326   11069 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:28.124330   11069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:28.124538   11069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	W0916 10:22:28.124648   11069 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19651-3763/.minikube/config/config.json: open /home/jenkins/minikube-integration/19651-3763/.minikube/config/config.json: no such file or directory
	I0916 10:22:28.125166   11069 out.go:352] Setting JSON to true
	I0916 10:22:28.126075   11069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":299,"bootTime":1726481849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:28.126165   11069 start.go:139] virtualization: kvm guest
	I0916 10:22:28.128458   11069 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:22:28.128574   11069 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:22:28.128623   11069 notify.go:220] Checking for updates...
	I0916 10:22:28.130017   11069 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:28.131347   11069 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:28.132661   11069 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:22:28.134000   11069 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:22:28.135196   11069 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:22:28.137411   11069 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:22:28.137645   11069 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:28.149524   11069 out.go:97] Using the none driver based on user configuration
	I0916 10:22:28.149546   11069 start.go:297] selected driver: none
	I0916 10:22:28.149557   11069 start.go:901] validating driver "none" against <nil>
	I0916 10:22:28.149587   11069 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 10:22:28.150171   11069 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:22:28.150976   11069 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 10:22:28.151185   11069 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:22:28.151224   11069 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.151295   11069 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:22:28.151372   11069 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:28.152781   11069 out.go:97] Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:22:28.153248   11069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:22:28.153281   11069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:28.153468   11069 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:22:28.153770   11069 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0916 10:22:28.153767   11069 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubeadm
	I0916 10:22:28.153776   11069 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet
	I0916 10:22:29.532492   11069 out.go:193] 
	W0916 10:22:29.533823   11069 out_reason.go:110] Failed to cache binaries: caching binary kubelet: download failed: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200] Decompressors:map[bz2:0xc000600f20 gz:0xc000600f28 tar:0xc000600ed0 tar.bz2:0xc000600ee0 tar.gz:0xc000600ef0 tar.xz:0xc000600f00 tar.zst:0xc000600f10 tbz2:0xc000600ee0 tgz:0xc000600ef0 txz:0xc000600f00 tzst:0xc000600f10 xz:0xc000600f30 zip:0xc000600f40 zst:0xc000600f38] Getters:map[file:0xc00188c0c0 http:0xc001888050 https:0xc0018880a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: stream error: stream ID 1; PROTOCOL_ERROR; received from peer
	W0916 10:22:29.533836   11069 out_reason.go:110] 
	W0916 10:22:29.535958   11069 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:22:29.537285   11069 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "minikube" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=none" "--bootstrapper=kubeadm"] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (1.45s)
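
Note: the root cause above is an HTTP/2-level failure ("stream error: stream ID 1; PROTOCOL_ERROR; received from peer") while go-getter fetched the kubelet binary from dl.k8s.io. Below is a minimal Go sketch, not part of the test suite, that probes the same endpoint with HTTP/2 disabled; if this succeeds while the cached download keeps failing, the problem sits in the HTTP/2 path (server, intermediary, or client) rather than in a missing release artifact. The .sha256 URL is taken from the failure message; everything else is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Probe the small checksum file rather than the full kubelet binary.
	url := "https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256"

	// A non-nil, empty TLSNextProto map disables HTTP/2 on this transport,
	// so the request is forced over HTTP/1.1.
	tr := &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	client := &http.Client{Transport: tr}

	resp, err := client.Head(url)
	if err != nil {
		fmt.Println("HTTP/1.1 request also failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status, "proto:", resp.Proto)
}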

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:158: expected the file for binary exist at "/home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet" but got error stat /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/binaries (0.00s)
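
Note: this failure is a direct knock-on effect of the download failure above; kubelet was never cached, so the stat of the cache path cannot succeed. The Volcano failure below shows a different symptom, "fork/exec /usr/local/bin/kubectl: exec format error", which the kernel raises before the program runs at all and which typically means the file is not a valid executable for the host architecture (truncated download, an HTML error page saved as a binary, or a wrong-GOARCH build). A small hypothetical Go check, assuming a linux/amd64 runner:

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	// Path taken from the Volcano failure below; the check itself is illustrative.
	path := "/usr/local/bin/kubectl"

	// "exec format error" is raised by the kernel from the file header, so
	// inspecting that header is usually enough to diagnose it.
	f, err := elf.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "not a valid ELF binary:", err)
		os.Exit(1)
	}
	defer f.Close()

	// On a linux/amd64 host we expect ELFCLASS64 and EM_X86_64.
	fmt.Println("class:", f.Class, "machine:", f.Machine)
}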

TestAddons/serial/Volcano (301.65s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 9.050087ms
addons_test.go:897: volcano-scheduler stabilized in 9.154345ms
addons_test.go:905: volcano-admission stabilized in 9.299094ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-l88qd" [02de1355-fb28-4bfa-90ee-a97d42abaa06] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003420649s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-t975d" [14c5f0a6-730a-4d1c-9c9a-fdabb951ca19] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003689064s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-kd2r2" [7f6941c0-aba9-4e12-94e2-3918b34deedc] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004085246s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:932: (dbg) Non-zero exit: kubectl --context minikube delete -n volcano-system job volcano-admission-init: fork/exec /usr/local/bin/kubectl: exec format error (373.211µs)
addons_test.go:934: vcjob creation with kubectl --context minikube delete -n volcano-system job volcano-admission-init failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Non-zero exit: kubectl --context minikube create -f testdata/vcjob.yaml: fork/exec /usr/local/bin/kubectl: exec format error (257.882µs)
addons_test.go:940: vcjob creation with kubectl --context minikube create -f testdata/vcjob.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (289.021µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (426.598µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (448.505µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (403.189µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (429.348µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (393.042µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (383.209µs)
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context minikube get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (434.915µs)
addons_test.go:960: failed checking volcano: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-16 10:29:57.400437022 +0000 UTC m=+449.346306546
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.183563929s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40127               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:13.140706   14731 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:13.140813   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140821   14731 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:13.140825   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140993   14731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:23:13.141565   14731 out.go:352] Setting JSON to false
	I0916 10:23:13.142443   14731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":344,"bootTime":1726481849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:13.142536   14731 start.go:139] virtualization: kvm guest
	I0916 10:23:13.144838   14731 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:23:13.146162   14731 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:23:13.146197   14731 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:13.146202   14731 notify.go:220] Checking for updates...
	I0916 10:23:13.148646   14731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:13.149886   14731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:13.151023   14731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:23:13.152258   14731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:13.153558   14731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:13.154983   14731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:13.165097   14731 out.go:177] * Using the none driver based on user configuration
	I0916 10:23:13.166355   14731 start.go:297] selected driver: none
	I0916 10:23:13.166366   14731 start.go:901] validating driver "none" against <nil>
	I0916 10:23:13.166376   14731 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:13.166401   14731 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:23:13.166708   14731 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:23:13.167363   14731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:13.167640   14731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:13.167685   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:13.167734   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:13.167744   14731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:13.167818   14731 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:13.169383   14731 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:23:13.171024   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.171056   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:13.171177   14731 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:23:13.171205   14731 start.go:364] duration metric: took 15.507µs to acquireMachinesLock for "minikube"
	I0916 10:23:13.171217   14731 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:13.171280   14731 start.go:125] createHost starting for "" (driver="none")
	I0916 10:23:13.173420   14731 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 10:23:13.174682   14731 exec_runner.go:51] Run: systemctl --version
	I0916 10:23:13.177006   14731 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 10:23:13.177034   14731 client.go:168] LocalClient.Create starting
	I0916 10:23:13.177131   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem
	I0916 10:23:13.177168   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177190   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177253   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem
	I0916 10:23:13.177275   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177285   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177573   14731 client.go:171] duration metric: took 533.456µs to LocalClient.Create
	I0916 10:23:13.177599   14731 start.go:167] duration metric: took 593.576µs to libmachine.API.Create "minikube"
	I0916 10:23:13.177608   14731 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:23:13.177642   14731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:13.177683   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:13.187236   14731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:13.187263   14731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:13.187275   14731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:13.189044   14731 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:23:13.190345   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:23:13.190401   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:23:13.190422   14731 start.go:296] duration metric: took 12.809081ms for postStartSetup
	I0916 10:23:13.191528   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.191738   14731 start.go:128] duration metric: took 20.449605ms to createHost
	I0916 10:23:13.191749   14731 start.go:83] releasing machines lock for "minikube", held for 20.535411ms
	I0916 10:23:13.192580   14731 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:13.192644   14731 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:23:13.194590   14731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:23:13.194649   14731 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:13.202734   14731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:23:13.202757   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.202792   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.202889   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.222327   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:13.230703   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:13.239020   14731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:13.239101   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:13.248805   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.257191   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:13.265887   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.274565   14731 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:13.283401   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:13.292383   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:13.300868   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:13.309031   14731 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:13.315780   14731 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:13.322874   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:13.538903   14731 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:23:13.606063   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.606117   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.606219   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.625810   14731 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:23:13.626697   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:23:13.634078   14731 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:23:13.634095   14731 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.634125   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.641943   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:23:13.642067   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube17162235 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.649525   14731 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:23:13.864371   14731 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:23:14.080198   14731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:23:14.080354   14731 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:23:14.080369   14731 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:23:14.080415   14731 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:23:14.088510   14731 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:23:14.088647   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube258152288 /etc/docker/daemon.json
	I0916 10:23:14.096396   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:14.312903   14731 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:23:14.614492   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:23:14.624711   14731 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:23:14.641378   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:14.651444   14731 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:23:14.875541   14731 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:23:15.086384   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.300370   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:23:15.313951   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:15.324456   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.540454   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:23:15.606406   14731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:23:15.606476   14731 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:23:15.607900   14731 start.go:563] Will wait 60s for crictl version
	I0916 10:23:15.607956   14731 exec_runner.go:51] Run: which crictl
	I0916 10:23:15.608880   14731 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:23:15.638324   14731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:23:15.638393   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.658714   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.681662   14731 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:23:15.681764   14731 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:15.684836   14731 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:23:15.686171   14731 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:15.686280   14731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:23:15.686290   14731 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0916 10:23:15.686371   14731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:23:15.686410   14731 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:23:15.733026   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.733051   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:15.733070   14731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:15.733090   14731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:15.733254   14731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:23:15.733305   14731 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.741208   14731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:23:15.741251   14731 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.748963   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:23:15.748989   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:23:15.748971   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:23:15.749021   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:15.749048   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:23:15.749023   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:23:15.759703   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:23:15.804184   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000397322 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:23:15.808532   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3573748997 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:23:15.825059   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036820018 /var/lib/minikube/binaries/v1.31.1/kubelet
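	(The three "Not caching binary" URLs above encode go-getter's checksum verification: the ?checksum=file:<url>.sha256 suffix makes the downloader fetch the published digest and verify the binary against it. A rough shell equivalent, with the version and binary name taken from this log:
	V=v1.31.1; B=kubelet
	curl -fsSLo "$B" "https://dl.k8s.io/release/$V/bin/linux/amd64/$B"
	# dl.k8s.io .sha256 files contain only the bare hash; sha256sum expects "HASH  FILE"
	echo "$(curl -fsSL "https://dl.k8s.io/release/$V/bin/linux/amd64/$B.sha256")  $B" | sha256sum --check
	)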
	I0916 10:23:15.890865   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:15.899083   14731 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:23:15.899106   14731 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.899146   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.906895   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:23:15.907034   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube686635375 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.914549   14731 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:23:15.914568   14731 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:23:15.914597   14731 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:23:15.921424   14731 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:15.921543   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube124460998 /lib/systemd/system/kubelet.service
	I0916 10:23:15.930481   14731 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:23:15.930611   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4089828324 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:23:15.938132   14731 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:15.939361   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:16.143380   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
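	(At this point the kubelet unit and its 10-kubeadm.conf drop-in have been replaced and systemd reloaded; a quick, generic way to confirm what systemd actually loaded — standard systemctl subcommands, not minikube-specific:
	systemctl cat kubelet.service    # unit file plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet      # should report: active
	)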
	I0916 10:23:16.158863   14731 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:23:16.158890   14731 certs.go:194] generating shared ca certs ...
	I0916 10:23:16.158911   14731 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.159062   14731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:23:16.159122   14731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:23:16.159135   14731 certs.go:256] generating profile certs ...
	I0916 10:23:16.159199   14731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:23:16.159225   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 10:23:16.405613   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt ...
	I0916 10:23:16.405642   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt: {Name:mk3286357234cda40557f508e5029c93016f9710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405782   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key ...
	I0916 10:23:16.405793   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key: {Name:mk20783244a73e90e04cdbc506e3032ad365b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405856   14731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:23:16.405870   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0916 10:23:16.569943   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0916 10:23:16.569971   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkaaeb0c21c9904b79d53b2917cee631d41c921c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570095   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0916 10:23:16.570104   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06e5d9a924eb3ef87fa2b5fa51a9f83a4abb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570154   14731 certs.go:381] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	I0916 10:23:16.570220   14731 certs.go:385] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key
	I0916 10:23:16.570270   14731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:23:16.570283   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 10:23:16.840205   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 10:23:16.840238   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt: {Name:mkffd4795ad0708e29c9e63a9f73c6e601584e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840383   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key ...
	I0916 10:23:16.840393   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key: {Name:mk1595e9621083c2801a11be8a4c6d2c56ebeb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840537   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:23:16.840569   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:16.840594   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:16.840624   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
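	(Any of the cert paths listed above can be inspected directly; a sketch with openssl, assuming OpenSSL 1.1.1+ for the -ext flag:
	openssl x509 -noout -subject -enddate -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	)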
	I0916 10:23:16.841173   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:16.841296   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube746649098 /var/lib/minikube/certs/ca.crt
	I0916 10:23:16.850974   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:23:16.851102   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216583324 /var/lib/minikube/certs/ca.key
	I0916 10:23:16.859052   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:16.859162   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2429656602 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:23:16.867993   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:16.868122   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31356631 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:23:16.876316   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:23:16.876432   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2172809749 /var/lib/minikube/certs/apiserver.crt
	I0916 10:23:16.883937   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:16.884043   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3752504884 /var/lib/minikube/certs/apiserver.key
	I0916 10:23:16.891211   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:16.891348   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1611886685 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:23:16.898521   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:16.898630   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414896728 /var/lib/minikube/certs/proxy-client.key
	I0916 10:23:16.905794   14731 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:23:16.905813   14731 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.905843   14731 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.913039   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:16.913160   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817740740 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.920335   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:16.920430   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1902791778 /var/lib/minikube/kubeconfig
	I0916 10:23:16.929199   14731 exec_runner.go:51] Run: openssl version
	I0916 10:23:16.931944   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:16.940176   14731 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941576   14731 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941622   14731 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.944402   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
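	(The b5213941.0 symlink name is not arbitrary: it is the OpenSSL subject hash of the CA, which is exactly what the two commands above compute and link. Condensed:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)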
	I0916 10:23:16.952213   14731 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:16.953336   14731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:16.953373   14731 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:16.953468   14731 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:23:16.968833   14731 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:16.976751   14731 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:16.984440   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:17.005001   14731 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:17.013500   14731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:17.013523   14731 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:17.013559   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:17.021530   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:17.021577   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:17.029363   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:17.038339   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:17.038392   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:17.046433   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.055974   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:17.056021   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.064002   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:17.087369   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:17.087421   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:17.094700   14731 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:23:17.125739   14731 kubeadm.go:310] W0916 10:23:17.125617   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.126248   14731 kubeadm.go:310] W0916 10:23:17.126207   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.127875   14731 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:17.127925   14731 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:17.218197   14731 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:17.218241   14731 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:17.218245   14731 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:17.218250   14731 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
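	(The preflight hint above can be run ahead of time against the same staged binaries and config; a sketch:
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	  kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
	)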
	I0916 10:23:17.228659   14731 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:17.231432   14731 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:17.231476   14731 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:17.231492   14731 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:17.409888   14731 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:17.475990   14731 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:17.539491   14731 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:17.796104   14731 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:18.073234   14731 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:18.073357   14731 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.366388   14731 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:18.366499   14731 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.555987   14731 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:18.639688   14731 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:18.710297   14731 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:18.710445   14731 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:19.161742   14731 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:19.258436   14731 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:19.315076   14731 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:19.572576   14731 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:19.765615   14731 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:19.766182   14731 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:19.768469   14731 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:19.770925   14731 out.go:235]   - Booting up control plane ...
	I0916 10:23:19.770956   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:19.770979   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:19.770988   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:19.791511   14731 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:19.797034   14731 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:19.797064   14731 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:20.020707   14731 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:20.020728   14731 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:20.522367   14731 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.615965ms
	I0916 10:23:20.522388   14731 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:24.524089   14731 kubeadm.go:310] [api-check] The API server is healthy after 4.001711526s
	I0916 10:23:24.534645   14731 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:24.545508   14731 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:24.561586   14731 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:24.561610   14731 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:24.569540   14731 kubeadm.go:310] [bootstrap-token] Using token: 60y8iu.vk0rxdhc25utw4uo
	I0916 10:23:24.571078   14731 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:24.571112   14731 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:24.575563   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:24.581879   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:24.584635   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:24.587409   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:24.589877   14731 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:24.929369   14731 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:25.351323   14731 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:25.929753   14731 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:25.930651   14731 kubeadm.go:310] 
	I0916 10:23:25.930669   14731 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:25.930673   14731 kubeadm.go:310] 
	I0916 10:23:25.930677   14731 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:25.930693   14731 kubeadm.go:310] 
	I0916 10:23:25.930705   14731 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:25.930710   14731 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:25.930713   14731 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:25.930717   14731 kubeadm.go:310] 
	I0916 10:23:25.930721   14731 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:25.930725   14731 kubeadm.go:310] 
	I0916 10:23:25.930730   14731 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:25.930737   14731 kubeadm.go:310] 
	I0916 10:23:25.930742   14731 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:25.930749   14731 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:25.930753   14731 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:25.930759   14731 kubeadm.go:310] 
	I0916 10:23:25.930763   14731 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:25.930765   14731 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:25.930768   14731 kubeadm.go:310] 
	I0916 10:23:25.930770   14731 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930773   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d \
	I0916 10:23:25.930778   14731 kubeadm.go:310] 	--control-plane 
	I0916 10:23:25.930781   14731 kubeadm.go:310] 
	I0916 10:23:25.930784   14731 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:25.930791   14731 kubeadm.go:310] 
	I0916 10:23:25.930794   14731 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930798   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d 
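	(For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; the standard derivation from the kubeadm docs, assuming an RSA CA key and the certificatesDir from this config:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	)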
	I0916 10:23:25.933502   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:25.933525   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:25.935106   14731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:23:25.936272   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:23:25.946405   14731 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:23:25.946528   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951121141 /etc/cni/net.d/1-k8s.conflist
	I0916 10:23:25.957597   14731 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:25.957652   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:25.957691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 10:23:25.966602   14731 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:26.024809   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:26.524979   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.025101   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.525561   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.024962   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.525631   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.025594   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.525691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.024918   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.524850   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.024821   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.098521   14731 kubeadm.go:1113] duration metric: took 5.140910239s to wait for elevateKubeSystemPrivileges
	I0916 10:23:31.098550   14731 kubeadm.go:394] duration metric: took 14.145180358s to StartCluster
	I0916 10:23:31.098572   14731 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.098640   14731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:31.099273   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.099484   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:31.099563   14731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:31.099694   14731 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:23:31.099713   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.099725   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:23:31.099724   14731 addons.go:69] Setting yakd=true in profile "minikube"
	I0916 10:23:31.099749   14731 addons.go:234] Setting addon yakd=true in "minikube"
	I0916 10:23:31.099762   14731 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0916 10:23:31.099777   14731 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0916 10:23:31.099788   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.099807   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100187   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100203   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100227   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100376   14731 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0916 10:23:31.100405   14731 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0916 10:23:31.100436   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100438   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100445   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100453   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100459   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100485   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100491   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100769   14731 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0916 10:23:31.100790   14731 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0916 10:23:31.100826   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.101070   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101090   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101123   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101267   14731 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0916 10:23:31.101295   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0916 10:23:31.101510   14731 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0916 10:23:31.101527   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101535   14731 mustload.go:65] Loading cluster: minikube
	I0916 10:23:31.101541   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101572   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101737   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.101867   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101887   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101919   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102148   14731 addons.go:69] Setting volcano=true in profile "minikube"
	I0916 10:23:31.102169   14731 addons.go:234] Setting addon volcano=true in "minikube"
	I0916 10:23:31.102195   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.102220   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102233   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102253   14731 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0916 10:23:31.102265   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102298   14731 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:31.102312   14731 out.go:177] * Configuring local host environment ...
	I0916 10:23:31.102789   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102801   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102825   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.103836   14731 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0916 10:23:31.103861   14731 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0916 10:23:31.103905   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104241   14731 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:23:31.104257   14731 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:23:31.104275   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104742   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104753   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104763   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104773   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104784   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104812   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104956   14731 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0916 10:23:31.102331   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104975   14731 addons.go:69] Setting registry=true in profile "minikube"
	I0916 10:23:31.104984   14731 addons.go:234] Setting addon registry=true in "minikube"
	I0916 10:23:31.105000   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.105157   14731 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0916 10:23:31.105184   14731 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0916 10:23:31.105213   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104967   14731 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0916 10:23:31.105323   14731 host.go:66] Checking if "minikube" exists ...
	W0916 10:23:31.106873   14731 out.go:270] * 
	W0916 10:23:31.106888   14731 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:23:31.106896   14731 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:23:31.106903   14731 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:23:31.106909   14731 out.go:270] * 
	W0916 10:23:31.106955   14731 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:23:31.106962   14731 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:23:31.106971   14731 out.go:270] * 
	W0916 10:23:31.106995   14731 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:23:31.107002   14731 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:23:31.107009   14731 out.go:270] * 
	W0916 10:23:31.107018   14731 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:23:31.107045   14731 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:31.107984   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.107997   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.108026   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.108454   14731 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:31.109770   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.109792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.109828   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.110054   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:31.124712   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.127087   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.128504   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.130104   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.138756   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.138792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.138831   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.139721   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.139749   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.139779   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.142090   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142122   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142129   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142151   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142345   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.156934   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.156999   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.158343   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.158400   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.160580   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.163820   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.169364   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.171885   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.171953   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.173802   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.173849   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.174374   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.174420   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176241   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.176292   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176846   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.185299   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.186516   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.186575   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.194708   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.194738   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.194977   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.195032   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199863   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199893   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199933   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199946   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.200834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.200854   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.201607   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.201750   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.205007   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.205028   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.205039   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.205094   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.206485   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.210587   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.212372   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.212395   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
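	(The interleaved freezer and healthz lines above — one sequence per addon goroutine — all perform the same three-step liveness check. Condensed into a single pass, with the commands taken from this log and curl added as an illustrative stand-in for minikube's HTTPS probe:
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	CG=$(sudo egrep '^[0-9]+:freezer:' "/proc/$PID/cgroup" | cut -d: -f3-)
	sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # expect: THAWED
	curl -ks https://10.138.0.48:8443/healthz              # expect: ok
	)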
	I0916 10:23:31.213745   14731 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:31.214160   14731 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 10:23:31.214415   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.216499   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.216520   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.216547   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.217076   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.217112   14731 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:31.217909   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube143406645 /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.218842   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.219226   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.219253   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.220512   14731 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:31.220867   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.221546   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.223173   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.221979   14731 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.223461   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:31.223768   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3150586776 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.225359   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.227613   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.227660   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.229063   14731 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:31.229334   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
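	(The long pipeline above rewrites the CoreDNS Corefile in place: it fetches the coredns ConfigMap, splices in a hosts block mapping host.minikube.internal to 127.0.0.1 plus a log directive via sed, and replaces the ConfigMap. To see the result, reusing pieces already in this log:
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	)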
	I0916 10:23:31.230849   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.230883   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231177   14731 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:31.231657   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.231693   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234554   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231695   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234684   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.232274   14731 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:31.235888   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.236046   14731 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236071   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:31.236209   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3107188705 /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236904   14731 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:31.238542   14731 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.238573   14731 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:31.238771   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2095578904 /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.239882   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.240045   14731 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:31.244446   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.245954   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:31.246834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252064   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.246956   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.252578   14731 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.252624   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:31.246990   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252873   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247002   14731 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:31.253137   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube95020260 /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.247038   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253167   14731 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:31.253286   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2405129530 /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253617   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.253668   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
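	
	These two commands are how minikube verifies, on a cgroup-v1 host, that the apiserver process is not frozen: it looks up the freezer cgroup for the apiserver PID in /proc/<pid>/cgroup, then reads that cgroup's freezer.state, expecting THAWED (running normally; FROZEN would mean the cgroup is suspended). Reproduced by hand (PID 16036 is taken from the egrep invocations in this log; the placeholder stands for the long pod/container path shown above):
	
	        sudo egrep '^[0-9]+:freezer:' /proc/16036/cgroup
	        sudo cat /sys/fs/cgroup/freezer/<path-from-previous-output>/freezer.state   # expect: THAWED
	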
	I0916 10:23:31.247061   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.253722   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247236   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:31.255868   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.255894   14731 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:31.255954   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.255976   14731 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:31.256002   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671809590 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.256098   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1236849984 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.257119   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.257771   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:31.259551   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259704   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259965   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.260128   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.260751   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.261489   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.261250   14731 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:31.261394   14731 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0916 10:23:31.262031   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.262778   14731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:31.262782   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.262800   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.262829   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.262833   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:31.264514   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264537   14731 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:23:31.264545   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264584   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264768   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:31.264924   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.264959   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:31.265088   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2364820269 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.266759   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.268033   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:31.268086   14731 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:31.269452   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:31.269500   14731 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:31.272346   14731 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272373   14731 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:31.272497   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754220183 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272890   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:31.275160   14731 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275188   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:31.275361   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2480903723 /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275532   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:31.277158   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277179   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:31.277664   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube478526718 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277859   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.277882   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:31.278022   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2636867839 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.290799   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.290835   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:31.291218   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3814086991 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.295428   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.295459   14731 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:31.295604   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3740101312 /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.306392   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.306425   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.311213   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.311248   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:31.311424   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube747122049 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.312994   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.313036   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.317835   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.318230   14731 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:31.323578   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube338558244 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.341814   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.341846   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:31.341971   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323528791 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.342204   14731 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.342226   14731 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:31.342566   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.342625   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.342837   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.342890   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube292318438 /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.343078   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.343101   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:31.343219   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4032243386 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.358435   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.358525   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358549   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:31.358693   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2881932452 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358881   14731 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:31.359009   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1282728706 /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.359505   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.366545   14731 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.366587   14731 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:31.366713   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1171915216 /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.378664   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.378695   14731 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:31.378815   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473351497 /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.380393   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.380417   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.382937   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.382966   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:31.383096   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2529455688 /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.384304   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.384326   14731 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:31.384438   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube881397 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.385231   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.385271   14731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385284   14731 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:23:31.385292   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385328   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.387805   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.387835   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:31.387939   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332358551 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.390197   14731 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.390227   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:31.390366   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46497832 /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.397672   14731 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:31.397951   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3186992100 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.403599   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.403630   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:31.403754   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445986553 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.409076   14731 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.409115   14731 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:31.409283   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1651200957 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.415599   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.415621   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:31.415721   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918202348 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.417404   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.423447   14731 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423472   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:31.423586   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube419582909 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423765   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.423804   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.436943   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.438121   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.443433   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.443523   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:31.443757   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube41635707 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.462088   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462127   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:31.462266   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805595243 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462657   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:31.462783   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160047024 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.464607   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.476223   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.479433   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.479463   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.482688   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.487583   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.490669   14731 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:31.492378   14731 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:31.493942   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.493975   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:31.494108   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281912972 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.499328   14731 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499357   14731 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:31.499374   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.499400   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:31.499487   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2719508217 /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499527   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411641332 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.518103   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.577544   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.577588   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:31.577779   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601059446 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.583317   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.651738   14731 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.651774   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:31.653267   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1921119500 /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.672720   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:31.786205   14731 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789214   14731 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:23:31.789238   14731 node_ready.go:38] duration metric: took 2.992874ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789249   14731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:31.802669   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:31.813190   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.813232   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:31.813392   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591024036 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.863589   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.965015   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.965162   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:31.966268   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3974451214 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.977982   14731 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0916 10:23:32.088850   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.088892   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:32.089762   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434131392 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.191154   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.191186   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:32.191329   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332266551 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.242672   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.242725   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:32.243830   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2503739100 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.299481   14731 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
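	minikube service resolves a Service to a URL reachable from the host and prints it (opening a browser where one is available); with the none driver the URL points at the host itself. A non-interactive variant, using the documented --url flag, is:
	
	        minikube service yakd-dashboard -n yakd-dashboard --url
	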
	I0916 10:23:32.324442   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.403566   14731 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0916 10:23:32.489342   14731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0916 10:23:32.514409   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.096961786s)
	I0916 10:23:32.514451   14731 addons.go:475] Verifying addon registry=true in "minikube"
	I0916 10:23:32.516449   14731 out.go:177] * Verifying registry addon...
	I0916 10:23:32.528963   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:23:32.532579   14731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:32.532675   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:32.570911   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.088181519s)
	I0916 10:23:32.907708   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.389561221s)
	I0916 10:23:32.966699   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.383338477s)
	I0916 10:23:33.052703   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.126489   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.262849545s)
	I0916 10:23:33.178161   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.713502331s)
	W0916 10:23:33.178208   14731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:33.178247   14731 retry.go:31] will retry after 159.834349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
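	
	This first failure is a CRD-establishment race rather than a real error: the VolumeSnapshotClass object is submitted in the same apply batch as the CRDs that define it, and the apiserver has not finished establishing those CRDs when the resource mapping is resolved, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries; the forced re-apply on the next line completes successfully about a second later. A manual equivalent would apply the CRDs first and wait for their Established condition before creating instances, e.g.:
	
	        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	        kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	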
	I0916 10:23:33.338693   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:33.540389   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.809689   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:34.053876   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.539589   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.570200   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.231431807s)
	I0916 10:23:34.612191   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.252641903s)
	I0916 10:23:34.884849   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.560344146s)
	I0916 10:23:34.884890   14731 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:34.886878   14731 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:34.890123   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:34.895733   14731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:34.895758   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.033190   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.396363   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.534375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.895151   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.035637   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.308497   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:36.395655   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.533207   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.895449   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.033542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.395180   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.533433   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.895384   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.033538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.473613   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:23:38.473795   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398753053 /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.474692   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.484004   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:23:38.484134   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434783837 /var/lib/minikube/google_cloud_project
	I0916 10:23:38.494551   14731 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0916 10:23:38.494595   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:38.495054   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:38.495069   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:38.495094   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:38.511610   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:38.520861   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:38.520914   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:38.529401   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:38.529444   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:38.599469   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:38.599542   14731 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.600327   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.656167   14731 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:23:38.735860   14731 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:38.798815   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.798859   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:23:38.798995   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2626597480 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.808091   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:38.862000   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.862041   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:23:38.862151   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046341520 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.872893   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.872922   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:23:38.873036   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2054254500 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.883326   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.894333   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.033277   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.262619   14731 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0916 10:23:39.264955   14731 out.go:177] * Verifying gcp-auth addon...
	I0916 10:23:39.266807   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:23:39.268717   14731 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:23:39.310878   14731 pod_ready.go:98] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310904   14731 pod_ready.go:82] duration metric: took 7.508146008s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	E0916 10:23:39.310915   14731 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
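	
	The wait on coredns-7c65d6cfc9-hd5hq is abandoned rather than failed because the pod's phase is Succeeded: its container exited cleanly (ExitCode:0, Reason:Completed) right after the "coredns ... rescaled to 1 replicas" step above, consistent with this replica being the one removed by the scale-down. The readiness check then moves on to the surviving replica, coredns-7c65d6cfc9-vlmkz, on the next line. One way to see the phases side by side (a generic kubectl invocation, not part of the test harness):
	
	        kubectl -n kube-system get pods -l k8s-app=kube-dns \
	          -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
	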
	I0916 10:23:39.310924   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:39.395512   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.532567   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.894633   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.033580   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.394602   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.533200   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.815447   14731 pod_ready.go:93] pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.815468   14731 pod_ready.go:82] duration metric: took 1.504536219s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.815477   14731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819153   14731 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.819171   14731 pod_ready.go:82] duration metric: took 3.688538ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819180   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822800   14731 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.822815   14731 pod_ready.go:82] duration metric: took 3.628798ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822823   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826537   14731 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.826556   14731 pod_ready.go:82] duration metric: took 3.726729ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826567   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.894014   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.906975   14731 pod_ready.go:93] pod "kube-proxy-gm7kv" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.906995   14731 pod_ready.go:82] duration metric: took 80.421296ms for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.907005   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.033182   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.307459   14731 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.307479   14731 pod_ready.go:82] duration metric: took 400.467827ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.307488   14731 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.394410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.532263   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.707267   14731 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.707293   14731 pod_ready.go:82] duration metric: took 399.79657ms for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.707305   14731 pod_ready.go:39] duration metric: took 9.918041839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:41.707331   14731 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:23:41.707469   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:41.727079   14731 api_server.go:72] duration metric: took 10.620002836s to wait for apiserver process to appear ...
	I0916 10:23:41.727105   14731 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:23:41.727130   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:41.731666   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:41.732551   14731 api_server.go:141] control plane version: v1.31.1
	I0916 10:23:41.732571   14731 api_server.go:131] duration metric: took 5.460229ms to wait for apiserver health ...
	I0916 10:23:41.732579   14731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:23:41.894027   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.998997   14731 system_pods.go:59] 17 kube-system pods found
	I0916 10:23:41.999033   14731 system_pods.go:61] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:41.999047   14731 system_pods.go:61] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:41.999057   14731 system_pods.go:61] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:41.999075   14731 system_pods.go:61] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:41.999087   14731 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:41.999092   14731 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:41.999096   14731 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:41.999099   14731 system_pods.go:61] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:41.999104   14731 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:41.999111   14731 system_pods.go:61] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:41.999114   14731 system_pods.go:61] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:41.999127   14731 system_pods.go:61] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:41.999138   14731 system_pods.go:61] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:41.999147   14731 system_pods.go:61] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999159   14731 system_pods.go:61] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999164   14731 system_pods.go:61] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:41.999175   14731 system_pods.go:61] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:41.999182   14731 system_pods.go:74] duration metric: took 266.598276ms to wait for pod list to return data ...
	I0916 10:23:41.999196   14731 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:23:42.032591   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.106881   14731 default_sa.go:45] found service account: "default"
	I0916 10:23:42.106907   14731 default_sa.go:55] duration metric: took 107.703967ms for default service account to be created ...
	I0916 10:23:42.106918   14731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:23:42.375306   14731 system_pods.go:86] 17 kube-system pods found
	I0916 10:23:42.375339   14731 system_pods.go:89] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:42.375347   14731 system_pods.go:89] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:42.375355   14731 system_pods.go:89] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:42.375362   14731 system_pods.go:89] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:42.375367   14731 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:42.375372   14731 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:42.375377   14731 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:42.375382   14731 system_pods.go:89] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:42.375385   14731 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:42.375395   14731 system_pods.go:89] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:42.375400   14731 system_pods.go:89] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:42.375405   14731 system_pods.go:89] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:42.375411   14731 system_pods.go:89] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:42.375417   14731 system_pods.go:89] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375425   14731 system_pods.go:89] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375429   14731 system_pods.go:89] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:42.375435   14731 system_pods.go:89] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:42.375442   14731 system_pods.go:126] duration metric: took 268.518179ms to wait for k8s-apps to be running ...
	I0916 10:23:42.375451   14731 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:23:42.375494   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:42.387115   14731 system_svc.go:56] duration metric: took 11.655134ms WaitForService to wait for kubelet
	I0916 10:23:42.387140   14731 kubeadm.go:582] duration metric: took 11.2800718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:42.387171   14731 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:23:42.394773   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.507386   14731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:23:42.507413   14731 node_conditions.go:123] node cpu capacity is 8
	I0916 10:23:42.507426   14731 node_conditions.go:105] duration metric: took 120.250263ms to run NodePressure ...
	I0916 10:23:42.507440   14731 start.go:241] waiting for startup goroutines ...
	I0916 10:23:42.531600   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.894380   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.032814   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.393764   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.533097   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.895538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.033018   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.394939   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.532533   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.923857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.032464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.395518   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.532657   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.894621   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.033157   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.394820   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.533142   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.894150   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.032554   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.394103   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.532755   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.923101   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.032246   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.393952   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.531988   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.894443   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.032216   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.395492   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.532583   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.894398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.033134   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.394173   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.532730   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.895356   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.032410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.394499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.532834   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.894466   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.032976   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.393504   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.532575   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.895473   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.032897   14731 kapi.go:107] duration metric: took 20.503936091s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:53.395464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.897663   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.395912   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.895542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.394636   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.895289   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.394104   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.894685   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.394359   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.894369   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.394113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.895010   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.394765   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.895050   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.394699   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.893904   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.394519   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.893535   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.394889   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.894397   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.441082   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.893998   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.395141   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.895375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.395269   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.896063   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.394972   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.894856   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.395279   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.895293   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.394857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.896499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.394125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.895033   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.395202   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.894724   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.394201   14731 kapi.go:107] duration metric: took 36.504077115s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:20.771019   14731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:20.771044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.269732   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.769379   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.270108   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.770020   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.270002   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.769993   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.270052   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.770494   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.270065   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.770030   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.269978   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.769822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.269485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.770749   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.270006   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.769786   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.269361   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.770193   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.270017   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.769639   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.269368   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.770132   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.270538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.770922   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.270016   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.770707   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.269925   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.770343   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.270669   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.770484   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.269981   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.770067   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.269913   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.769999   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.269695   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.769660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.270376   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.770125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.270113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.769635   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.269392   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.770622   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.270727   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.771121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.270788   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.779792   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.269641   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.771197   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.270296   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.770234   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.270660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.770461   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.270582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.770582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.269826   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.769427   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.270745   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.769804   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.270843   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.770187   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.270064   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.769562   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.270917   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.769965   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.270218   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.770822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.269777   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.770121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.269909   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.770485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.271044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.770398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.270401   14731 kapi.go:107] duration metric: took 1m18.003594843s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:24:57.272413   14731 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0916 10:24:57.273706   14731 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:24:57.274969   14731 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:24:57.276179   14731 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, metrics-server, helm-tiller, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, volcano, registry, csi-hostpath-driver, gcp-auth
	I0916 10:24:57.277503   14731 addons.go:510] duration metric: took 1m26.177945157s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd metrics-server helm-tiller storage-provisioner storage-provisioner-rancher inspektor-gadget volumesnapshots volcano registry csi-hostpath-driver gcp-auth]
	I0916 10:24:57.277539   14731 start.go:246] waiting for cluster config update ...
	I0916 10:24:57.277557   14731 start.go:255] writing updated cluster config ...
	I0916 10:24:57.277828   14731 exec_runner.go:51] Run: rm -f paused
	I0916 10:24:57.280918   14731 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:24:57.282289   14731 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
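	
	The trailing error above ("exec format error" from fork/exec of /usr/local/bin/kubectl) usually indicates a binary built for a different CPU architecture than the host. A minimal diagnostic sketch using standard tools; only the kubectl path is taken from the log, the example outputs are illustrative:
	
		file /usr/local/bin/kubectl   # reports the binary format, e.g. "ELF 64-bit LSB executable, x86-64"
		uname -m                      # reports the host architecture, e.g. "x86_64"
	
	The gcp-auth messages above also mention the `gcp-auth-skip-secret` label for opting a pod out of credential mounting. Since the webhook mutates pods at admission, the label must be present when the pod is created; a hypothetical manifest sketch (the pod name "demo" and image "busybox" are placeholders, not part of this run):
	
		cat <<'EOF' | kubectl apply -f -
		apiVersion: v1
		kind: Pod
		metadata:
		  name: demo
		  labels:
		    gcp-auth-skip-secret: "true"
		spec:
		  containers:
		  - name: demo
		    image: busybox
		    command: ["sleep", "3600"]
		EOF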
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:29:57 UTC. --
	Sep 16 10:24:43 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:43Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3"
	Sep 16 10:24:43 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:43Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3: Status: Image is up to date for registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3"
	Sep 16 10:24:43 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:43.897766055Z" level=info msg="ignoring event" container=accdf3c09065b761e0a0a55c020962668a8d35289cbb96512db0e3168ad5d2a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:24:43 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:43.932682187Z" level=info msg="ignoring event" container=22f827d0c1a333d730892f830478e6ba303171c3d6e67c94d5b330cb9440044a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:24:44 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:44.797877851Z" level=info msg="ignoring event" container=6ad7641f51bab588d54118e74d951ba42d1ee0445bdd7b8e10822f13d3b97166 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:24:45 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:45.814587528Z" level=info msg="ignoring event" container=6975824ffd7f7c43e5d91641fa9df0b09af49a64d8016e20e43857c5aceaf1e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:24:46 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:46.840460434Z" level=info msg="ignoring event" container=762d0dcf119db3c85aa90d74317c987d9d5654760b6a9b80360bf6dc4577ad83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:24:52 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/872b837fda1bc3bc79246a006645323c59a1eacf48607943b30dbfc2ec8dbff6/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:24:52 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:52.643681603Z" level=warning msg="reference for unknown type: " digest="sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Sep 16 10:24:56 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Sep 16 10:24:56 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:56Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.921394894Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.921394785Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.923527826Z" level=error msg="Error running exec 40de4d4402a849a66630e4b3e224b5cac52a3344d4191ab61093c755f1eae2f9 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:24:58 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:58.030336094Z" level=info msg="ignoring event" container=063696e8a73aabc89418d2c58e71706ba02ccbbecf8ff00cbae4ce69ab4d8dc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:25:38 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:25:38Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:25:40 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:25:40.013070122Z" level=info msg="ignoring event" container=285e9d3bf61063164576db1e8b56067f2715f3125c65a408fb460b33df4e0df3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:27:12 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:27:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836428Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836085Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.785558764Z" level=error msg="Error running exec 13e088d02d0a5f22acc5e5b1a4471ba70b2f244b367260c945e607695da23676 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799299215Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799311411Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.801146259Z" level=error msg="Error running exec 8124ff9355b2b195f4666e956e5c04835c7ab5bbca41ab5f07f5d54c9a438e8a in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.997546489Z" level=info msg="ignoring event" container=f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	f3640752ee05a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            2 minutes ago       Exited              gadget                                   5                   3902ec2c22c13       gadget-zt2b4
	b806437d39cb5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 5 minutes ago       Running             gcp-auth                                 0                   872b837fda1bc       gcp-auth-89d5ffd79-wt6q9
	6b6303f81cb52       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          5 minutes ago       Running             csi-snapshotter                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d549f78521f57       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          5 minutes ago       Running             csi-provisioner                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	9125db73d99e1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            5 minutes ago       Running             liveness-probe                           0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	87c37483d2112       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           5 minutes ago       Running             hostpath                                 0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	cd42401f74b1d       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         5 minutes ago       Running             admission                                0                   d5cc1eab65661       volcano-admission-77d7d48b68-t975d
	0c0ddb709904f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                5 minutes ago       Running             node-driver-registrar                    0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	b0782903176d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              5 minutes ago       Running             csi-resizer                              0                   fb9dfe220b3dc       csi-hostpath-resizer-0
	4edaa9f0351e1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             5 minutes ago       Running             csi-attacher                             0                   fa27205224e9f       csi-hostpath-attacher-0
	f0ce5f8efdc2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   5 minutes ago       Running             csi-external-health-monitor-controller   0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d35f343c48bcb       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               5 minutes ago       Running             volcano-scheduler                        0                   ca6d7d9980376       volcano-scheduler-576bc46687-l88qd
	3fa7892ed6588       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      5 minutes ago       Running             volcano-controllers                      0                   1d8c71b5408cc       volcano-controllers-56675bb4d5-kd2r2
	23bdeff0c7c03       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         5 minutes ago       Exited              main                                     0                   2684a290edfd1       volcano-admission-init-4rd4m
	a7c6ba8b5b8e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      6 minutes ago       Running             volume-snapshot-controller               0                   2a9eff5290337       snapshot-controller-56fcc65765-c729p
	59e2e493c17f7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      6 minutes ago       Running             volume-snapshot-controller               0                   a62d801d6adc1       snapshot-controller-56fcc65765-hhv7d
	c5ee33602669d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       6 minutes ago       Running             local-path-provisioner                   0                   6fcb08908435e       local-path-provisioner-86d989889c-xpx7m
	6dbe08ccc6f03       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              6 minutes ago       Running             registry-proxy                           0                   8a0796a6fd139       registry-proxy-qvvnb
	fe6d1bd912755       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  6 minutes ago       Running             tiller                                   0                   4cc0471023071       tiller-deploy-b48cc5f79-jhzqk
	bc6d19b424172       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             6 minutes ago       Running             registry                                 0                   bede25b8f44c4       registry-66c9cd494c-9ffzq
	c2bb3772d49b5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        6 minutes ago       Running             yakd                                     0                   54361ea6661c2       yakd-dashboard-67d98fc6b-ggfmd
	1c9f6a3099faf       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        6 minutes ago       Running             metrics-server                           0                   1d5dec60ab67a       metrics-server-84c5f94fbc-wfrnf
	566744d15c91f       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               6 minutes ago       Running             cloud-spanner-emulator                   0                   2ce78388a8512       cloud-spanner-emulator-769b77f747-7x6cj
	1cb6e9270416d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     6 minutes ago       Running             nvidia-device-plugin-ctr                 0                   6c5f84705a086       nvidia-device-plugin-daemonset-dcrh9
	e19218997c830       6e38f40d628db                                                                                                                                6 minutes ago       Running             storage-provisioner                      0                   debc24e02ca98       storage-provisioner
	e0a1b4e718aed       c69fa2e9cbf5f                                                                                                                                6 minutes ago       Running             coredns                                  0                   44104ce9decd6       coredns-7c65d6cfc9-vlmkz
	95dfe8f64bc6f       60c005f310ff3                                                                                                                                6 minutes ago       Running             kube-proxy                               0                   3eddba63436f7       kube-proxy-gm7kv
	236092569fa7f       2e96e5913fc06                                                                                                                                6 minutes ago       Running             etcd                                     0                   f4c192de28c8e       etcd-ubuntu-20-agent-2
	f656d4b3e221b       6bab7719df100                                                                                                                                6 minutes ago       Running             kube-apiserver                           0                   13c6d1481d7e3       kube-apiserver-ubuntu-20-agent-2
	abadc50dd44f1       175ffd71cce3d                                                                                                                                6 minutes ago       Running             kube-controller-manager                  0                   2dd1e926360a9       kube-controller-manager-ubuntu-20-agent-2
	0412032e5006c       9aa1fad941575                                                                                                                                6 minutes ago       Running             kube-scheduler                           0                   b7f61176a82d0       kube-scheduler-ubuntu-20-agent-2
	
	
	==> coredns [e0a1b4e718ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59960 - 9097 "HINFO IN 5932384522844147917.1993008146596938559. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018267326s
	[INFO] 10.244.0.24:39221 - 38983 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000387765s
	[INFO] 10.244.0.24:57453 - 43799 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000481367s
	[INFO] 10.244.0.24:56558 - 1121 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126982s
	[INFO] 10.244.0.24:37367 - 64790 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137381s
	[INFO] 10.244.0.24:53874 - 61210 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129517s
	[INFO] 10.244.0.24:35488 - 47376 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167054s
	[INFO] 10.244.0.24:39756 - 34231 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003382584s
	[INFO] 10.244.0.24:42692 - 8269 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003496461s
	[INFO] 10.244.0.24:40495 - 49254 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00344128s
	[INFO] 10.244.0.24:54381 - 40672 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003513746s
	[INFO] 10.244.0.24:45458 - 51280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002837809s
	[INFO] 10.244.0.24:39080 - 48381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003158709s
	[INFO] 10.244.0.24:49164 - 30651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00123377s
	[INFO] 10.244.0.24:33687 - 1000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001779254s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:29:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-7x6cj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  gadget                      gadget-zt2b4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  gcp-auth                    gcp-auth-89d5ffd79-wt6q9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 coredns-7c65d6cfc9-vlmkz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m28s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 csi-hostpathplugin-x6gtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m34s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-gm7kv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 metrics-server-84c5f94fbc-wfrnf              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m26s
	  kube-system                 nvidia-device-plugin-daemonset-dcrh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 registry-66c9cd494c-9ffzq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 registry-proxy-qvvnb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 snapshot-controller-56fcc65765-c729p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 snapshot-controller-56fcc65765-hhv7d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 tiller-deploy-b48cc5f79-jhzqk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  local-path-storage          local-path-provisioner-86d989889c-xpx7m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  volcano-system              volcano-admission-77d7d48b68-t975d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  volcano-system              volcano-controllers-56675bb4d5-kd2r2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  volcano-system              volcano-scheduler-576bc46687-l88qd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ggfmd               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m26s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m38s (x6 over 6m38s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m33s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m33s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m33s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m33s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m29s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 4f 68 84 7c 26 08 06
	[  +0.029810] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 4a d1 e3 09 35 08 06
	[  +2.541456] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 35 1c 77 2c 6a 08 06
	[Sep16 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 2e 0e e0 53 6a 08 06
	[  +1.979621] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	
	
	==> etcd [236092569fa7] <==
	{"level":"info","ts":"2024-09-16T10:23:22.168340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:23:22.168349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:23:22.168359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:23:22.169311Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.169894Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:23:22.169903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.169924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.170145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170298Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.171038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:23:22.172233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:23:34.396500Z","caller":"traceutil/trace.go:171","msg":"trace[1443924902] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"122.443714ms","start":"2024-09-16T10:23:34.274027Z","end":"2024-09-16T10:23:34.396470Z","steps":["trace[1443924902] 'process raft request'  (duration: 42.860188ms)","trace[1443924902] 'compare'  (duration: 79.401186ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:23:34.396568Z","caller":"traceutil/trace.go:171","msg":"trace[1914523289] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"119.254337ms","start":"2024-09-16T10:23:34.277291Z","end":"2024-09-16T10:23:34.396545Z","steps":["trace[1914523289] 'process raft request'  (duration: 119.164267ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396664Z","caller":"traceutil/trace.go:171","msg":"trace[551861205] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"121.694141ms","start":"2024-09-16T10:23:34.274951Z","end":"2024-09-16T10:23:34.396645Z","steps":["trace[551861205] 'process raft request'  (duration: 121.454274ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396765Z","caller":"traceutil/trace.go:171","msg":"trace[612276300] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"117.724007ms","start":"2024-09-16T10:23:34.279030Z","end":"2024-09-16T10:23:34.396754Z","steps":["trace[612276300] 'process raft request'  (duration: 117.466969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396775Z","caller":"traceutil/trace.go:171","msg":"trace[485760124] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"107.084096ms","start":"2024-09-16T10:23:34.289681Z","end":"2024-09-16T10:23:34.396765Z","steps":["trace[485760124] 'process raft request'  (duration: 106.857041ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396851Z","caller":"traceutil/trace.go:171","msg":"trace[655456638] linearizableReadLoop","detail":"{readStateIndex:770; appliedIndex:767; }","duration":"117.963693ms","start":"2024-09-16T10:23:34.278878Z","end":"2024-09-16T10:23:34.396842Z","steps":["trace[655456638] 'read index received'  (duration: 5.820633ms)","trace[655456638] 'applied index is now lower than readState.Index'  (duration: 112.141241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:23:34.396925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.026308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:23:34.396979Z","caller":"traceutil/trace.go:171","msg":"trace[1000991150] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate; range_end:; response_count:0; response_revision:752; }","duration":"118.092731ms","start":"2024-09-16T10:23:34.278875Z","end":"2024-09-16T10:23:34.396968Z","steps":["trace[1000991150] 'agreement among raft nodes before linearized reading'  (duration: 118.006643ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:38.471576Z","caller":"traceutil/trace.go:171","msg":"trace[1536302833] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"154.211147ms","start":"2024-09-16T10:23:38.317339Z","end":"2024-09-16T10:23:38.471550Z","steps":["trace[1536302833] 'process raft request'  (duration: 154.053853ms)"],"step_count":1}
	
	
	==> gcp-auth [b806437d39cb] <==
	2024/09/16 10:24:56 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:29:58 up 12 min,  0 users,  load average: 0.09, 0.27, 0.18
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [f656d4b3e221] <==
	E0916 10:23:59.786707       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:23:59.788263       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:03.532842       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:04.623446       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:05.663512       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:06.687369       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:07.741783       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:08.796077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:09.892806       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.278243       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.278280       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.279887       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.290102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.290145       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.291730       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.911493       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:11.942936       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:13.040622       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:14.059340       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:20.272187       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:20.272230       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.287211       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.287254       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.296283       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.296314       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [abadc50dd44f] <==
	I0916 10:24:42.302334       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:42.307286       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:42.310264       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:42.312505       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:42.320196       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:44.683682       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:44.692525       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:45.715415       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:45.872836       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:46.737302       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:46.879053       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:46.884761       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:46.886340       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:46.889958       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:47.742623       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:47.749628       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:47.754341       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:56.917790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="5.791811ms"
	I0916 10:24:56.918045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="71.081µs"
	I0916 10:24:57.310368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:25:16.014611       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:25:16.035749       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:25:17.007655       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:25:17.024575       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:25:28.007825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	
	
	==> kube-proxy [95dfe8f64bc6] <==
	I0916 10:23:31.205838       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:31.406402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:23:31.406455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:31.489030       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:31.489102       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:31.508985       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:31.509483       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:31.509513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:31.539926       1 config.go:199] "Starting service config controller"
	I0916 10:23:31.540054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:31.559259       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:31.559278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:31.559824       1 config.go:328] "Starting node config controller"
	I0916 10:23:31.559836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:31.641834       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:31.660551       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:23:31.660598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0412032e5006] <==
	W0916 10:23:23.040568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:23.040650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.040660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:23.040674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:23.040716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.040756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.848417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.848457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.947205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.947244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.963782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:23.963827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.018222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:23:24.018276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.056374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:24.056418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.187965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:24.188004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.200436       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:24.200484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:23:24.239846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:24.239894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:27.139487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:29:58 UTC. --
	Sep 16 10:27:14 ubuntu-20-agent-2 kubelet[16162]: E0916 10:27:14.726397   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:27:18 ubuntu-20-agent-2 kubelet[16162]: I0916 10:27:18.471601   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:27:18 ubuntu-20-agent-2 kubelet[16162]: E0916 10:27:18.471785   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:27:31 ubuntu-20-agent-2 kubelet[16162]: I0916 10:27:31.378337   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:27:31 ubuntu-20-agent-2 kubelet[16162]: E0916 10:27:31.378524   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:27:45 ubuntu-20-agent-2 kubelet[16162]: I0916 10:27:45.378445   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:27:45 ubuntu-20-agent-2 kubelet[16162]: E0916 10:27:45.378613   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:27:58 ubuntu-20-agent-2 kubelet[16162]: I0916 10:27:58.377900   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:27:58 ubuntu-20-agent-2 kubelet[16162]: E0916 10:27:58.378194   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:28:11 ubuntu-20-agent-2 kubelet[16162]: I0916 10:28:11.377748   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:28:11 ubuntu-20-agent-2 kubelet[16162]: E0916 10:28:11.377944   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:28:25 ubuntu-20-agent-2 kubelet[16162]: I0916 10:28:25.378703   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:28:25 ubuntu-20-agent-2 kubelet[16162]: E0916 10:28:25.378945   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:28:39 ubuntu-20-agent-2 kubelet[16162]: I0916 10:28:39.377834   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:28:39 ubuntu-20-agent-2 kubelet[16162]: E0916 10:28:39.378019   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:28:53 ubuntu-20-agent-2 kubelet[16162]: I0916 10:28:53.378127   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:28:53 ubuntu-20-agent-2 kubelet[16162]: E0916 10:28:53.378408   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:29:08 ubuntu-20-agent-2 kubelet[16162]: I0916 10:29:08.378334   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:29:08 ubuntu-20-agent-2 kubelet[16162]: E0916 10:29:08.378516   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:29:22 ubuntu-20-agent-2 kubelet[16162]: I0916 10:29:22.377893   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:29:22 ubuntu-20-agent-2 kubelet[16162]: E0916 10:29:22.378081   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:29:36 ubuntu-20-agent-2 kubelet[16162]: I0916 10:29:36.378197   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:29:36 ubuntu-20-agent-2 kubelet[16162]: E0916 10:29:36.378396   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:29:47 ubuntu-20-agent-2 kubelet[16162]: I0916 10:29:47.378242   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:29:47 ubuntu-20-agent-2 kubelet[16162]: E0916 10:29:47.378461   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	
	
	==> storage-provisioner [e19218997c83] <==
	I0916 10:23:33.807788       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:23:33.819755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:23:33.821506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:23:33.836239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:23:33.837177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	I0916 10:23:33.840556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"272307eb-dbc1-400e-a5a3-6595c2b694d1", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407 became leader
	I0916 10:23:33.937802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (339.975µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/serial/Volcano (301.65s)
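Every kubectl invocation in this run fails the same way: "fork/exec /usr/local/bin/kubectl: exec format error". On Linux that error means the kernel rejected the file at exec() time, typically because it is a binary built for another architecture, or a truncated/corrupted download, rather than a valid x86_64 ELF executable. A minimal Go sketch of a sanity check one could run on the agent, assuming only the path shown in the logs (debug/elf is the standard-library ELF reader):

	package main

	import (
		"debug/elf"
		"fmt"
	)

	func main() {
		// elf.Open fails outright if the file is not an ELF binary at all,
		// e.g. a truncated download or an HTML error page saved to disk.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			fmt.Println("not a valid ELF executable:", err)
			return
		}
		defer f.Close()
		// On this amd64 agent the machine type should print as EM_X86_64;
		// any other value (e.g. EM_AARCH64) would reproduce "exec format error".
		fmt.Println("ELF machine type:", f.Machine)
	}

The same broken binary plausibly explains the GCPAuth and Registry failures below, where every kubectl step dies in well under a millisecond.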

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:656: (dbg) Non-zero exit: kubectl --context minikube create ns new-namespace: fork/exec /usr/local/bin/kubectl: exec format error (273.732µs)
addons_test.go:658: kubectl --context minikube create ns new-namespace failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/serial/GCPAuth/Namespaces (0.00s)
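The sub-millisecond duration (273.732µs) confirms the command never started: the failure happens inside the fork/exec syscall, before kubectl runs any code. In Go, which this test harness is written in, such a failure surfaces from os/exec as a wrapped syscall errno rather than a non-zero exit status. A small sketch of how that error can be classified, assuming a Linux host and a deliberately malformed file at the hypothetical path ./not-a-binary:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"syscall"
	)

	func main() {
		// ./not-a-binary is a stand-in for a file that exists but is not a
		// valid executable; Run fails during fork/exec, not with an ExitError.
		err := exec.Command("./not-a-binary").Run()
		if errors.Is(err, syscall.ENOEXEC) {
			fmt.Println("exec format error: wrong architecture or corrupted binary")
		} else if err != nil {
			fmt.Println("failed for another reason:", err)
		}
	}

Distinguishing ENOEXEC from an ordinary non-zero exit separates "the binary is unusable" from "kubectl ran and reported an error", which is exactly the distinction these failing tests hinge on.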

                                                
                                    
TestAddons/parallel/Registry (11.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.988753ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003618463s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004149726s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:342: (dbg) Non-zero exit: kubectl --context minikube delete po -l run=registry-test --now: fork/exec /usr/local/bin/kubectl: exec format error (365.206µs)
addons_test.go:344: pre-cleanup kubectl --context minikube delete po -l run=registry-test --now failed: fork/exec /usr/local/bin/kubectl: exec format error (not a problem)
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": fork/exec /usr/local/bin/kubectl: exec format error (324.335µs)
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/16 10:30:10 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40127               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:13.140706   14731 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:13.140813   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140821   14731 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:13.140825   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140993   14731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:23:13.141565   14731 out.go:352] Setting JSON to false
	I0916 10:23:13.142443   14731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":344,"bootTime":1726481849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:13.142536   14731 start.go:139] virtualization: kvm guest
	I0916 10:23:13.144838   14731 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:23:13.146162   14731 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:23:13.146197   14731 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:13.146202   14731 notify.go:220] Checking for updates...
	I0916 10:23:13.148646   14731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:13.149886   14731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:13.151023   14731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:23:13.152258   14731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:13.153558   14731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:13.154983   14731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:13.165097   14731 out.go:177] * Using the none driver based on user configuration
	I0916 10:23:13.166355   14731 start.go:297] selected driver: none
	I0916 10:23:13.166366   14731 start.go:901] validating driver "none" against <nil>
	I0916 10:23:13.166376   14731 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:13.166401   14731 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:23:13.166708   14731 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:23:13.167363   14731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:13.167640   14731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:13.167685   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:13.167734   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:13.167744   14731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:13.167818   14731 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:13.169383   14731 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:23:13.171024   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.171056   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:13.171177   14731 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:23:13.171205   14731 start.go:364] duration metric: took 15.507µs to acquireMachinesLock for "minikube"
	I0916 10:23:13.171217   14731 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:13.171280   14731 start.go:125] createHost starting for "" (driver="none")
	I0916 10:23:13.173420   14731 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 10:23:13.174682   14731 exec_runner.go:51] Run: systemctl --version
	I0916 10:23:13.177006   14731 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 10:23:13.177034   14731 client.go:168] LocalClient.Create starting
	I0916 10:23:13.177131   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem
	I0916 10:23:13.177168   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177190   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177253   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem
	I0916 10:23:13.177275   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177285   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177573   14731 client.go:171] duration metric: took 533.456µs to LocalClient.Create
	I0916 10:23:13.177599   14731 start.go:167] duration metric: took 593.576µs to libmachine.API.Create "minikube"
	I0916 10:23:13.177608   14731 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:23:13.177642   14731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:13.177683   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:13.187236   14731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:13.187263   14731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:13.187275   14731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:13.189044   14731 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:23:13.190345   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:23:13.190401   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:23:13.190422   14731 start.go:296] duration metric: took 12.809081ms for postStartSetup
	I0916 10:23:13.191528   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.191738   14731 start.go:128] duration metric: took 20.449605ms to createHost
	I0916 10:23:13.191749   14731 start.go:83] releasing machines lock for "minikube", held for 20.535411ms
	I0916 10:23:13.192580   14731 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:13.192644   14731 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:23:13.194590   14731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:23:13.194649   14731 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:13.202734   14731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:23:13.202757   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.202792   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.202889   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.222327   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:13.230703   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:13.239020   14731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:13.239101   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:13.248805   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.257191   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:13.265887   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.274565   14731 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:13.283401   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:13.292383   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:13.300868   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:13.309031   14731 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:13.315780   14731 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:13.322874   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:13.538903   14731 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:23:13.606063   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.606117   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.606219   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.625810   14731 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:23:13.626697   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:23:13.634078   14731 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:23:13.634095   14731 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.634125   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.641943   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:23:13.642067   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube17162235 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.649525   14731 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:23:13.864371   14731 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:23:14.080198   14731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:23:14.080354   14731 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:23:14.080369   14731 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:23:14.080415   14731 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:23:14.088510   14731 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:23:14.088647   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube258152288 /etc/docker/daemon.json
	I0916 10:23:14.096396   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:14.312903   14731 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:23:14.614492   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:23:14.624711   14731 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:23:14.641378   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:14.651444   14731 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:23:14.875541   14731 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:23:15.086384   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.300370   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:23:15.313951   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:15.324456   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.540454   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:23:15.606406   14731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:23:15.606476   14731 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:23:15.607900   14731 start.go:563] Will wait 60s for crictl version
	I0916 10:23:15.607956   14731 exec_runner.go:51] Run: which crictl
	I0916 10:23:15.608880   14731 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:23:15.638324   14731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:23:15.638393   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.658714   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.681662   14731 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:23:15.681764   14731 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:15.684836   14731 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:23:15.686171   14731 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:15.686280   14731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:23:15.686290   14731 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0916 10:23:15.686371   14731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:23:15.686410   14731 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:23:15.733026   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.733051   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:15.733070   14731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:15.733090   14731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:15.733254   14731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:23:15.733305   14731 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.741208   14731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:23:15.741251   14731 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.748963   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:23:15.748989   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:23:15.748971   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:23:15.749021   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:15.749048   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:23:15.749023   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:23:15.759703   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:23:15.804184   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000397322 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:23:15.808532   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3573748997 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:23:15.825059   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036820018 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:23:15.890865   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:15.899083   14731 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:23:15.899106   14731 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.899146   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.906895   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:23:15.907034   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube686635375 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.914549   14731 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:23:15.914568   14731 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:23:15.914597   14731 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:23:15.921424   14731 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:15.921543   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube124460998 /lib/systemd/system/kubelet.service
	I0916 10:23:15.930481   14731 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:23:15.930611   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4089828324 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:23:15.938132   14731 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:15.939361   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:16.143380   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:16.158863   14731 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:23:16.158890   14731 certs.go:194] generating shared ca certs ...
	I0916 10:23:16.158911   14731 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.159062   14731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:23:16.159122   14731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:23:16.159135   14731 certs.go:256] generating profile certs ...
	I0916 10:23:16.159199   14731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:23:16.159225   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 10:23:16.405613   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt ...
	I0916 10:23:16.405642   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt: {Name:mk3286357234cda40557f508e5029c93016f9710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405782   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key ...
	I0916 10:23:16.405793   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key: {Name:mk20783244a73e90e04cdbc506e3032ad365b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405856   14731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:23:16.405870   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0916 10:23:16.569943   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0916 10:23:16.569971   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkaaeb0c21c9904b79d53b2917cee631d41c921c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570095   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0916 10:23:16.570104   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06e5d9a924eb3ef87fa2b5fa51a9f83a4abb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570154   14731 certs.go:381] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	I0916 10:23:16.570220   14731 certs.go:385] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key
	I0916 10:23:16.570270   14731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:23:16.570283   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 10:23:16.840205   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 10:23:16.840238   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt: {Name:mkffd4795ad0708e29c9e63a9f73c6e601584e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840383   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key ...
	I0916 10:23:16.840393   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key: {Name:mk1595e9621083c2801a11be8a4c6d2c56ebeb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840537   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:23:16.840569   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:16.840594   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:16.840624   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:16.841173   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:16.841296   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube746649098 /var/lib/minikube/certs/ca.crt
	I0916 10:23:16.850974   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:23:16.851102   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216583324 /var/lib/minikube/certs/ca.key
	I0916 10:23:16.859052   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:16.859162   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2429656602 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:23:16.867993   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:16.868122   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31356631 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:23:16.876316   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:23:16.876432   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2172809749 /var/lib/minikube/certs/apiserver.crt
	I0916 10:23:16.883937   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:16.884043   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3752504884 /var/lib/minikube/certs/apiserver.key
	I0916 10:23:16.891211   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:16.891348   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1611886685 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:23:16.898521   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:16.898630   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414896728 /var/lib/minikube/certs/proxy-client.key
	I0916 10:23:16.905794   14731 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:23:16.905813   14731 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.905843   14731 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.913039   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:16.913160   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817740740 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.920335   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:16.920430   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1902791778 /var/lib/minikube/kubeconfig
	I0916 10:23:16.929199   14731 exec_runner.go:51] Run: openssl version
	I0916 10:23:16.931944   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:16.940176   14731 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941576   14731 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941622   14731 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.944402   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:23:16.952213   14731 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:16.953336   14731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:16.953373   14731 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:16.953468   14731 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:23:16.968833   14731 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:16.976751   14731 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:16.984440   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:17.005001   14731 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:17.013500   14731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:17.013523   14731 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:17.013559   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:17.021530   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:17.021577   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:17.029363   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:17.038339   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:17.038392   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:17.046433   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.055974   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:17.056021   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.064002   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:17.087369   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:17.087421   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:17.094700   14731 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:23:17.125739   14731 kubeadm.go:310] W0916 10:23:17.125617   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.126248   14731 kubeadm.go:310] W0916 10:23:17.126207   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.127875   14731 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:17.127925   14731 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:17.218197   14731 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:17.218241   14731 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:17.218245   14731 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:17.218250   14731 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:17.228659   14731 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:17.231432   14731 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:17.231476   14731 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:17.231492   14731 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:17.409888   14731 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:17.475990   14731 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:17.539491   14731 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:17.796104   14731 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:18.073234   14731 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:18.073357   14731 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.366388   14731 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:18.366499   14731 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.555987   14731 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:18.639688   14731 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:18.710297   14731 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:18.710445   14731 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:19.161742   14731 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:19.258436   14731 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:19.315076   14731 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:19.572576   14731 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:19.765615   14731 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:19.766182   14731 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:19.768469   14731 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:19.770925   14731 out.go:235]   - Booting up control plane ...
	I0916 10:23:19.770956   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:19.770979   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:19.770988   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:19.791511   14731 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:19.797034   14731 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:19.797064   14731 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:20.020707   14731 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:20.020728   14731 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:20.522367   14731 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.615965ms
	I0916 10:23:20.522388   14731 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:24.524089   14731 kubeadm.go:310] [api-check] The API server is healthy after 4.001711526s
	I0916 10:23:24.534645   14731 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:24.545508   14731 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:24.561586   14731 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:24.561610   14731 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:24.569540   14731 kubeadm.go:310] [bootstrap-token] Using token: 60y8iu.vk0rxdhc25utw4uo
	I0916 10:23:24.571078   14731 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:24.571112   14731 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:24.575563   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:24.581879   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:24.584635   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:24.587409   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:24.589877   14731 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:24.929369   14731 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:25.351323   14731 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:25.929753   14731 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:25.930651   14731 kubeadm.go:310] 
	I0916 10:23:25.930669   14731 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:25.930673   14731 kubeadm.go:310] 
	I0916 10:23:25.930677   14731 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:25.930693   14731 kubeadm.go:310] 
	I0916 10:23:25.930705   14731 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:25.930710   14731 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:25.930713   14731 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:25.930717   14731 kubeadm.go:310] 
	I0916 10:23:25.930721   14731 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:25.930725   14731 kubeadm.go:310] 
	I0916 10:23:25.930730   14731 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:25.930737   14731 kubeadm.go:310] 
	I0916 10:23:25.930742   14731 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:25.930749   14731 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:25.930753   14731 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:25.930759   14731 kubeadm.go:310] 
	I0916 10:23:25.930763   14731 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:25.930765   14731 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:25.930768   14731 kubeadm.go:310] 
	I0916 10:23:25.930770   14731 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930773   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d \
	I0916 10:23:25.930778   14731 kubeadm.go:310] 	--control-plane 
	I0916 10:23:25.930781   14731 kubeadm.go:310] 
	I0916 10:23:25.930784   14731 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:25.930791   14731 kubeadm.go:310] 
	I0916 10:23:25.930794   14731 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930798   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d 
	I0916 10:23:25.933502   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:25.933525   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:25.935106   14731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:23:25.936272   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:23:25.946405   14731 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:23:25.946528   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951121141 /etc/cni/net.d/1-k8s.conflist
	I0916 10:23:25.957597   14731 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:25.957652   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:25.957691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 10:23:25.966602   14731 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:26.024809   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:26.524979   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.025101   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.525561   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.024962   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.525631   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.025594   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.525691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.024918   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.524850   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.024821   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.098521   14731 kubeadm.go:1113] duration metric: took 5.140910239s to wait for elevateKubeSystemPrivileges
	I0916 10:23:31.098550   14731 kubeadm.go:394] duration metric: took 14.145180358s to StartCluster
	I0916 10:23:31.098572   14731 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.098640   14731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:31.099273   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.099484   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:31.099563   14731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:31.099694   14731 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:23:31.099713   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.099725   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:23:31.099724   14731 addons.go:69] Setting yakd=true in profile "minikube"
	I0916 10:23:31.099749   14731 addons.go:234] Setting addon yakd=true in "minikube"
	I0916 10:23:31.099762   14731 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0916 10:23:31.099777   14731 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0916 10:23:31.099788   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.099807   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100187   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100203   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100227   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100376   14731 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0916 10:23:31.100405   14731 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0916 10:23:31.100436   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100438   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100445   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100453   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100459   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100485   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100491   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100769   14731 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0916 10:23:31.100790   14731 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0916 10:23:31.100826   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.101070   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101090   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101123   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101267   14731 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0916 10:23:31.101295   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0916 10:23:31.101510   14731 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0916 10:23:31.101527   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101535   14731 mustload.go:65] Loading cluster: minikube
	I0916 10:23:31.101541   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101572   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101737   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.101867   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101887   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101919   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102148   14731 addons.go:69] Setting volcano=true in profile "minikube"
	I0916 10:23:31.102169   14731 addons.go:234] Setting addon volcano=true in "minikube"
	I0916 10:23:31.102195   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.102220   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102233   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102253   14731 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0916 10:23:31.102265   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102298   14731 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:31.102312   14731 out.go:177] * Configuring local host environment ...
	I0916 10:23:31.102789   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102801   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102825   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.103836   14731 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0916 10:23:31.103861   14731 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0916 10:23:31.103905   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104241   14731 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:23:31.104257   14731 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:23:31.104275   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104742   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104753   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104763   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104773   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104784   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104812   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104956   14731 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0916 10:23:31.102331   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104975   14731 addons.go:69] Setting registry=true in profile "minikube"
	I0916 10:23:31.104984   14731 addons.go:234] Setting addon registry=true in "minikube"
	I0916 10:23:31.105000   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.105157   14731 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0916 10:23:31.105184   14731 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0916 10:23:31.105213   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104967   14731 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0916 10:23:31.105323   14731 host.go:66] Checking if "minikube" exists ...
	W0916 10:23:31.106873   14731 out.go:270] * 
	W0916 10:23:31.106888   14731 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:23:31.106896   14731 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:23:31.106903   14731 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:23:31.106909   14731 out.go:270] * 
	W0916 10:23:31.106955   14731 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:23:31.106962   14731 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:23:31.106971   14731 out.go:270] * 
	W0916 10:23:31.106995   14731 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:23:31.107002   14731 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:23:31.107009   14731 out.go:270] * 
	W0916 10:23:31.107018   14731 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
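The warnings above are standard for the none driver: minikube runs as root here, so kubeconfig and profile data land in root-owned paths under /home/jenkins. A minimal sketch of the automatic fix the last warning mentions, assuming minikube is started through sudo:

    # have minikube chown its artifacts back to the invoking user
    export CHANGE_MINIKUBE_NONE_USER=true
    sudo -E minikube start --driver=none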
	I0916 10:23:31.107045   14731 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:31.107984   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.107997   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.108026   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.108454   14731 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:31.109770   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.109792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.109828   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.110054   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:31.124712   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.127087   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.128504   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.130104   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.138756   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.138792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.138831   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.139721   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.139749   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.139779   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.142090   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142122   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142129   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142151   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142345   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.156934   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.156999   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.158343   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.158400   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.160580   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.163820   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.169364   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.171885   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.171953   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.173802   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.173849   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.174374   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.174420   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176241   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.176292   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176846   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.185299   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.186516   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.186575   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.194708   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.194738   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.194977   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.195032   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199863   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199893   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199933   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199946   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.200834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.200854   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.201607   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.201750   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.205007   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.205028   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.205039   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.205094   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.206485   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.210587   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
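The interleaved lines above are several addon goroutines each running the same apiserver probe: locate the kube-apiserver process, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm the cgroup is THAWED, then hit /healthz over HTTPS. Condensed into one sequential shell sketch (cgroup v1 layout, as in this log):

    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    cg=$(sudo egrep '^[0-9]+:freezer:' /proc/"$pid"/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # expect: THAWED
    curl -sk https://10.138.0.48:8443/healthz              # expect: ok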
	I0916 10:23:31.212372   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.212395   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.213745   14731 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:31.214160   14731 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 10:23:31.214415   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.216499   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.216520   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.216547   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.217076   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.217112   14731 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:31.217909   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube143406645 /etc/kubernetes/addons/yakd-ns.yaml
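Every "installing ..." entry follows the two-step copy visible here: exec_runner writes the embedded asset to a temp file the process owns, then escalates only for the move into /etc/kubernetes/addons. The pattern, roughly:

    tmp=$(mktemp /tmp/minikubeXXXXXX)   # unprivileged temp copy of the embedded asset
    cp yakd-ns.yaml "$tmp"
    sudo cp -a "$tmp" /etc/kubernetes/addons/yakd-ns.yaml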
	I0916 10:23:31.218842   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.219226   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.219253   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.220512   14731 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:31.220867   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.221546   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.223173   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.221979   14731 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.223461   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:31.223768   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3150586776 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.225359   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.227613   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.227660   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.229063   14731 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:31.229334   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
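The sed pipeline above patches the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to 127.0.0.1 (the host itself, under the none driver) immediately before the forward plugin, and a log directive before errors, then replaces the ConfigMap. Reconstructed from the sed expressions, with the other default plugins elided, the patched fragment would read approximately:

    .:53 {
        log
        errors
        ...
        hosts {
           127.0.0.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }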
	I0916 10:23:31.230849   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.230883   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231177   14731 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:31.231657   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.231693   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234554   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231695   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234684   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.232274   14731 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:31.235888   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.236046   14731 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236071   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:31.236209   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3107188705 /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236904   14731 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:31.238542   14731 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.238573   14731 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:31.238771   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2095578904 /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.239882   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.240045   14731 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:31.244446   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.245954   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:31.246834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252064   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.246956   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.252578   14731 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.252624   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:31.246990   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252873   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247002   14731 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:31.253137   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube95020260 /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.247038   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253167   14731 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:31.253286   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2405129530 /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253617   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.253668   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.247061   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.253722   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247236   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:31.255868   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.255894   14731 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:31.255954   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.255976   14731 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:31.256002   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671809590 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.256098   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1236849984 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.257119   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.257771   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:31.259551   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259704   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259965   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.260128   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.260751   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.261489   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.261250   14731 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:31.261394   14731 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0916 10:23:31.262031   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.262778   14731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:31.262782   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.262800   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.262829   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.262833   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:31.264514   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264537   14731 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:23:31.264545   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264584   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264768   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:31.264924   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.264959   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:31.265088   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2364820269 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.266759   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.268033   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:31.268086   14731 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:31.269452   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:31.269500   14731 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:31.272346   14731 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272373   14731 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:31.272497   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754220183 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272890   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:31.275160   14731 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275188   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:31.275361   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2480903723 /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275532   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:31.277158   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277179   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:31.277664   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube478526718 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277859   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.277882   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:31.278022   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2636867839 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.290799   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.290835   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:31.291218   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3814086991 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.295428   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.295459   14731 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:31.295604   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3740101312 /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.306392   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.306425   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.311213   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.311248   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:31.311424   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube747122049 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.312994   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.313036   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.317835   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.318230   14731 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:31.323578   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube338558244 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.341814   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.341846   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:31.341971   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323528791 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.342204   14731 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.342226   14731 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:31.342566   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.342625   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.342837   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.342890   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube292318438 /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.343078   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.343101   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:31.343219   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4032243386 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.358435   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.358525   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358549   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:31.358693   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2881932452 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358881   14731 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:31.359009   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1282728706 /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.359505   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.366545   14731 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.366587   14731 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:31.366713   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1171915216 /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.378664   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.378695   14731 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:31.378815   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473351497 /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.380393   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.380417   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.382937   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.382966   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:31.383096   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2529455688 /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.384304   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.384326   14731 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:31.384438   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube881397 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.385231   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.385271   14731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385284   14731 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:23:31.385292   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385328   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.387805   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.387835   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:31.387939   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332358551 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.390197   14731 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.390227   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:31.390366   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46497832 /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.397672   14731 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:31.397951   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3186992100 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.403599   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.403630   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:31.403754   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445986553 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.409076   14731 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.409115   14731 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:31.409283   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1651200957 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.415599   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.415621   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:31.415721   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918202348 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.417404   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.423447   14731 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423472   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:31.423586   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube419582909 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423765   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.423804   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.436943   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.438121   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.443433   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.443523   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:31.443757   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube41635707 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.462088   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462127   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:31.462266   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805595243 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462657   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:31.462783   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160047024 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.464607   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.476223   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.479433   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.479463   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.482688   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.487583   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.490669   14731 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:31.492378   14731 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:31.493942   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.493975   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:31.494108   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281912972 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.499328   14731 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499357   14731 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:31.499374   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.499400   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:31.499487   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2719508217 /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499527   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411641332 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.518103   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.577544   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.577588   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:31.577779   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601059446 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.583317   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.651738   14731 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.651774   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:31.653267   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1921119500 /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.672720   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:31.786205   14731 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789214   14731 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:23:31.789238   14731 node_ready.go:38] duration metric: took 2.992874ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789249   14731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:31.802669   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
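node_ready.go and pod_ready.go poll the API through client-go; a rough CLI equivalent of the two six-minute gates logged above (an illustration, not what the test actually executes):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=condition=Ready node/ubuntu-20-agent-2 --timeout=6m
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=condition=Ready pod/coredns-7c65d6cfc9-hd5hq -n kube-system --timeout=6m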
	I0916 10:23:31.813190   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.813232   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:31.813392   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591024036 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.863589   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.965015   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.965162   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:31.966268   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3974451214 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.977982   14731 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0916 10:23:32.088850   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.088892   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:32.089762   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434131392 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.191154   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.191186   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:32.191329   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332266551 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.242672   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.242725   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:32.243830   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2503739100 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.299481   14731 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:32.324442   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.403566   14731 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0916 10:23:32.489342   14731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
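The rescale reported by kapi.go:214 trims CoreDNS from kubeadm's default two replicas to one, which is sufficient on a single-node cluster. The CLI equivalent would be:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1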
	I0916 10:23:32.514409   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.096961786s)
	I0916 10:23:32.514451   14731 addons.go:475] Verifying addon registry=true in "minikube"
	I0916 10:23:32.516449   14731 out.go:177] * Verifying registry addon...
	I0916 10:23:32.528963   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:23:32.532579   14731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:32.532675   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:32.570911   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.088181519s)
	I0916 10:23:32.907708   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.389561221s)
	I0916 10:23:32.966699   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.383338477s)
	I0916 10:23:33.052703   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.126489   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.262849545s)
	I0916 10:23:33.178161   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.713502331s)
	W0916 10:23:33.178208   14731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:33.178247   14731 retry.go:31] will retry after 159.834349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
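This is the usual CRD ordering race: the batch submits the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in the same apply, and the API server has not yet registered the new kind when the CR arrives, hence "ensure CRDs are installed first". minikube simply retries (below, re-applying with --force). A deterministic alternative would establish the CRDs before submitting the CR, for example:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml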
	I0916 10:23:33.338693   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:33.540389   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.809689   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:34.053876   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.539589   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.570200   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.231431807s)
	I0916 10:23:34.612191   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.252641903s)
	I0916 10:23:34.884849   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.560344146s)
	I0916 10:23:34.884890   14731 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:34.886878   14731 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:34.890123   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:34.895733   14731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:34.895758   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.033190   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.396363   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.534375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.895151   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.035637   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.308497   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:36.395655   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.533207   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.895449   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.033542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.395180   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.533433   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.895384   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.033538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.473613   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:23:38.473795   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398753053 /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.474692   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.484004   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:23:38.484134   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434783837 /var/lib/minikube/google_cloud_project
	I0916 10:23:38.494551   14731 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0916 10:23:38.494595   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:38.495054   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:38.495069   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:38.495094   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:38.511610   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:38.520861   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:38.520914   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:38.529401   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:38.529444   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:38.599469   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
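	Note: before probing healthz, minikube verifies that the kube-apiserver process is actually runnable: it resolves the PID with pgrep, looks up the process's freezer cgroup, and expects freezer.state to read THAWED (a frozen cgroup would accept connections but never answer). A manual equivalent of the checks logged above, assuming cgroups v1 on the same none-driver host; the healthz probe may need credentials if anonymous auth is disabled:

	APISERVER_PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	CGPATH=$(sudo grep -E '^[0-9]+:freezer:' /proc/$APISERVER_PID/cgroup | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${CGPATH}/freezer.state"   # expect: THAWED
	curl -sk https://10.138.0.48:8443/healthz                  # expect: ok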
	I0916 10:23:38.599542   14731 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.600327   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.656167   14731 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:23:38.735860   14731 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:38.798815   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.798859   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:23:38.798995   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2626597480 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.808091   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:38.862000   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.862041   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:23:38.862151   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046341520 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.872893   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.872922   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:23:38.873036   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2054254500 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.883326   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.894333   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.033277   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.262619   14731 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0916 10:23:39.264955   14731 out.go:177] * Verifying gcp-auth addon...
	I0916 10:23:39.266807   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:23:39.268717   14731 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:23:39.310878   14731 pod_ready.go:98] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310904   14731 pod_ready.go:82] duration metric: took 7.508146008s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	E0916 10:23:39.310915   14731 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310924   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:39.395512   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.532567   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.894633   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.033580   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.394602   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.533200   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.815447   14731 pod_ready.go:93] pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.815468   14731 pod_ready.go:82] duration metric: took 1.504536219s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.815477   14731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819153   14731 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.819171   14731 pod_ready.go:82] duration metric: took 3.688538ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819180   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822800   14731 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.822815   14731 pod_ready.go:82] duration metric: took 3.628798ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822823   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826537   14731 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.826556   14731 pod_ready.go:82] duration metric: took 3.726729ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826567   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.894014   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.906975   14731 pod_ready.go:93] pod "kube-proxy-gm7kv" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.906995   14731 pod_ready.go:82] duration metric: took 80.421296ms for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.907005   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.033182   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.307459   14731 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.307479   14731 pod_ready.go:82] duration metric: took 400.467827ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.307488   14731 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.394410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.532263   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.707267   14731 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.707293   14731 pod_ready.go:82] duration metric: took 399.79657ms for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.707305   14731 pod_ready.go:39] duration metric: took 9.918041839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:41.707331   14731 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:23:41.707469   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:41.727079   14731 api_server.go:72] duration metric: took 10.620002836s to wait for apiserver process to appear ...
	I0916 10:23:41.727105   14731 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:23:41.727130   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:41.731666   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:41.732551   14731 api_server.go:141] control plane version: v1.31.1
	I0916 10:23:41.732571   14731 api_server.go:131] duration metric: took 5.460229ms to wait for apiserver health ...
	I0916 10:23:41.732579   14731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:23:41.894027   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.998997   14731 system_pods.go:59] 17 kube-system pods found
	I0916 10:23:41.999033   14731 system_pods.go:61] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:41.999047   14731 system_pods.go:61] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:41.999057   14731 system_pods.go:61] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:41.999075   14731 system_pods.go:61] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:41.999087   14731 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:41.999092   14731 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:41.999096   14731 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:41.999099   14731 system_pods.go:61] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:41.999104   14731 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:41.999111   14731 system_pods.go:61] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:41.999114   14731 system_pods.go:61] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:41.999127   14731 system_pods.go:61] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:41.999138   14731 system_pods.go:61] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:41.999147   14731 system_pods.go:61] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999159   14731 system_pods.go:61] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999164   14731 system_pods.go:61] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:41.999175   14731 system_pods.go:61] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:41.999182   14731 system_pods.go:74] duration metric: took 266.598276ms to wait for pod list to return data ...
	I0916 10:23:41.999196   14731 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:23:42.032591   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.106881   14731 default_sa.go:45] found service account: "default"
	I0916 10:23:42.106907   14731 default_sa.go:55] duration metric: took 107.703967ms for default service account to be created ...
	I0916 10:23:42.106918   14731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:23:42.375306   14731 system_pods.go:86] 17 kube-system pods found
	I0916 10:23:42.375339   14731 system_pods.go:89] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:42.375347   14731 system_pods.go:89] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:42.375355   14731 system_pods.go:89] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:42.375362   14731 system_pods.go:89] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:42.375367   14731 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:42.375372   14731 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:42.375377   14731 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:42.375382   14731 system_pods.go:89] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:42.375385   14731 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:42.375395   14731 system_pods.go:89] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:42.375400   14731 system_pods.go:89] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:42.375405   14731 system_pods.go:89] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:42.375411   14731 system_pods.go:89] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:42.375417   14731 system_pods.go:89] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375425   14731 system_pods.go:89] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375429   14731 system_pods.go:89] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:42.375435   14731 system_pods.go:89] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:42.375442   14731 system_pods.go:126] duration metric: took 268.518179ms to wait for k8s-apps to be running ...
	I0916 10:23:42.375451   14731 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:23:42.375494   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:42.387115   14731 system_svc.go:56] duration metric: took 11.655134ms WaitForService to wait for kubelet
	I0916 10:23:42.387140   14731 kubeadm.go:582] duration metric: took 11.2800718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:42.387171   14731 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:23:42.394773   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.507386   14731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:23:42.507413   14731 node_conditions.go:123] node cpu capacity is 8
	I0916 10:23:42.507426   14731 node_conditions.go:105] duration metric: took 120.250263ms to run NodePressure ...
	I0916 10:23:42.507440   14731 start.go:241] waiting for startup goroutines ...
	I0916 10:23:42.531600   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.894380   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.032814   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.393764   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.533097   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.895538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.033018   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.394939   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.532533   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.923857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.032464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.395518   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.532657   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.894621   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.033157   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.394820   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.533142   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.894150   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.032554   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.394103   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.532755   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.923101   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.032246   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.393952   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.531988   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.894443   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.032216   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.395492   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.532583   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.894398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.033134   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.394173   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.532730   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.895356   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.032410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.394499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.532834   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.894466   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.032976   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.393504   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.532575   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.895473   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.032897   14731 kapi.go:107] duration metric: took 20.503936091s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:53.395464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.897663   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.395912   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.895542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.394636   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.895289   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.394104   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.894685   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.394359   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.894369   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.394113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.895010   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.394765   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.895050   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.394699   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.893904   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.394519   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.893535   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.394889   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.894397   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.441082   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.893998   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.395141   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.895375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.395269   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.896063   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.394972   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.894856   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.395279   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.895293   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.394857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.896499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.394125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.895033   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.395202   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.894724   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.394201   14731 kapi.go:107] duration metric: took 36.504077115s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:20.771019   14731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:20.771044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.269732   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.769379   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.270108   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.770020   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.270002   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.769993   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.270052   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.770494   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.270065   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.770030   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.269978   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.769822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.269485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.770749   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.270006   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.769786   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.269361   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.770193   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.270017   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.769639   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.269368   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.770132   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.270538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.770922   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.270016   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.770707   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.269925   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.770343   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.270669   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.770484   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.269981   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.770067   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.269913   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.769999   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.269695   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.769660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.270376   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.770125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.270113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.769635   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.269392   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.770622   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.270727   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.771121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.270788   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.779792   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.269641   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.771197   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.270296   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.770234   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.270660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.770461   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.270582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.770582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.269826   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.769427   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.270745   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.769804   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.270843   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.770187   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.270064   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.769562   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.270917   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.769965   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.270218   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.770822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.269777   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.770121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.269909   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.770485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.271044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.770398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.270401   14731 kapi.go:107] duration metric: took 1m18.003594843s to wait for kubernetes.io/minikube-addons=gcp-auth ...
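	Note: every kapi.go:96 line above is one tick of minikube's poll loop, which re-lists the pods matching a label selector roughly twice a second until all of them report Ready, then emits the kapi.go:107 duration. Roughly the same check can be done with stock kubectl, using the label and namespace from the log (the timeout value here is an assumption):

	kubectl -n gcp-auth wait pod \
	  -l kubernetes.io/minikube-addons=gcp-auth \
	  --for=condition=Ready --timeout=5m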
	I0916 10:24:57.272413   14731 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0916 10:24:57.273706   14731 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:24:57.274969   14731 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:24:57.276179   14731 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, metrics-server, helm-tiller, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, volcano, registry, csi-hostpath-driver, gcp-auth
	I0916 10:24:57.277503   14731 addons.go:510] duration metric: took 1m26.177945157s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd metrics-server helm-tiller storage-provisioner storage-provisioner-rancher inspektor-gadget volumesnapshots volcano registry csi-hostpath-driver gcp-auth]
	I0916 10:24:57.277539   14731 start.go:246] waiting for cluster config update ...
	I0916 10:24:57.277557   14731 start.go:255] writing updated cluster config ...
	I0916 10:24:57.277828   14731 exec_runner.go:51] Run: rm -f paused
	I0916 10:24:57.280918   14731 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:24:57.282289   14731 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
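	Note: "exec format error" from fork/exec means the kernel refused to run the binary at /usr/local/bin/kubectl, almost always because it was built for a different architecture than the host (or is truncated or not an executable at all). The cluster start itself succeeded; only this post-start convenience call to kubectl failed. A quick triage, assuming standard GNU/Linux tooling on the node:

	uname -m                                 # host architecture, e.g. x86_64
	file /usr/local/bin/kubectl              # reports the binary's target architecture
	head -c 4 /usr/local/bin/kubectl | xxd   # a valid ELF begins 7f 45 4c 46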
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:30:10 UTC. --
	Sep 16 10:24:56 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Sep 16 10:24:56 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:56Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.921394894Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.921394785Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.923527826Z" level=error msg="Error running exec 40de4d4402a849a66630e4b3e224b5cac52a3344d4191ab61093c755f1eae2f9 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:24:58 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:58.030336094Z" level=info msg="ignoring event" container=063696e8a73aabc89418d2c58e71706ba02ccbbecf8ff00cbae4ce69ab4d8dc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:25:38 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:25:38Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:25:40 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:25:40.013070122Z" level=info msg="ignoring event" container=285e9d3bf61063164576db1e8b56067f2715f3125c65a408fb460b33df4e0df3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:27:12 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:27:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836428Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836085Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.785558764Z" level=error msg="Error running exec 13e088d02d0a5f22acc5e5b1a4471ba70b2f244b367260c945e607695da23676 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799299215Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799311411Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.801146259Z" level=error msg="Error running exec 8124ff9355b2b195f4666e956e5c04835c7ab5bbca41ab5f07f5d54c9a438e8a in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.997546489Z" level=info msg="ignoring event" container=f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:01 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:30:01Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860094779Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860112359Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.861900754Z" level=error msg="Error running exec 7325b4844d467316c92c35912814ef76ffc52ab0706fc16a141d2d4c86eec807 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:03 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:03.053613980Z" level=info msg="ignoring event" container=f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.355786042Z" level=info msg="ignoring event" container=bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.422842358Z" level=info msg="ignoring event" container=6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.489977617Z" level=info msg="ignoring event" container=bede25b8f44c47a7583d31e5f552ceb2818b45bf9b6e66175cefd80b6e4a1ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.585848075Z" level=info msg="ignoring event" container=8a0796a6fd139e34146729f05330e8554afd338b598fd53c135d700704cea580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
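	
	The paired "stream copy error" / "cannot exec in a stopped container" messages above are consistent with an exec liveness probe racing a crash-looping container: by the time dockerd runs the exec, the target has already exited. The kubelet log at the end of this report ties these execs to the gadget pod's /bin/gadgettracermanager -liveness probe. A minimal check from the node, reusing a container ID from this log (docker inspect accepts unambiguous truncated IDs):
	
	    # list recent gadget containers with their status
	    docker ps -a --filter "name=gadget" --format '{{.ID}} {{.Status}}'
	    # state and restart count for one ID taken from the log above
	    docker inspect --format '{{.State.Status}} restarts={{.RestartCount}}' f63dc6bb021d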
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	f63dc6bb021d4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            9 seconds ago       Exited              gadget                                   6                   3902ec2c22c13       gadget-zt2b4
	b806437d39cb5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 5 minutes ago       Running             gcp-auth                                 0                   872b837fda1bc       gcp-auth-89d5ffd79-wt6q9
	6b6303f81cb52       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d549f78521f57       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          6 minutes ago       Running             csi-provisioner                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	9125db73d99e1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            6 minutes ago       Running             liveness-probe                           0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	87c37483d2112       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           6 minutes ago       Running             hostpath                                 0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	cd42401f74b1d       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         6 minutes ago       Running             admission                                0                   d5cc1eab65661       volcano-admission-77d7d48b68-t975d
	0c0ddb709904f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                6 minutes ago       Running             node-driver-registrar                    0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	b0782903176d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              6 minutes ago       Running             csi-resizer                              0                   fb9dfe220b3dc       csi-hostpath-resizer-0
	4edaa9f0351e1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             6 minutes ago       Running             csi-attacher                             0                   fa27205224e9f       csi-hostpath-attacher-0
	f0ce5f8efdc2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   6 minutes ago       Running             csi-external-health-monitor-controller   0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d35f343c48bcb       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               6 minutes ago       Running             volcano-scheduler                        0                   ca6d7d9980376       volcano-scheduler-576bc46687-l88qd
	3fa7892ed6588       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      6 minutes ago       Running             volcano-controllers                      0                   1d8c71b5408cc       volcano-controllers-56675bb4d5-kd2r2
	23bdeff0c7c03       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         6 minutes ago       Exited              main                                     0                   2684a290edfd1       volcano-admission-init-4rd4m
	a7c6ba8b5b8e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      6 minutes ago       Running             volume-snapshot-controller               0                   2a9eff5290337       snapshot-controller-56fcc65765-c729p
	59e2e493c17f7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      6 minutes ago       Running             volume-snapshot-controller               0                   a62d801d6adc1       snapshot-controller-56fcc65765-hhv7d
	c5ee33602669d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       6 minutes ago       Running             local-path-provisioner                   0                   6fcb08908435e       local-path-provisioner-86d989889c-xpx7m
	6dbe08ccc6f03       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              6 minutes ago       Exited              registry-proxy                           0                   8a0796a6fd139       registry-proxy-qvvnb
	fe6d1bd912755       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  6 minutes ago       Running             tiller                                   0                   4cc0471023071       tiller-deploy-b48cc5f79-jhzqk
	bc6d19b424172       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             6 minutes ago       Exited              registry                                 0                   bede25b8f44c4       registry-66c9cd494c-9ffzq
	c2bb3772d49b5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        6 minutes ago       Running             yakd                                     0                   54361ea6661c2       yakd-dashboard-67d98fc6b-ggfmd
	1c9f6a3099faf       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        6 minutes ago       Running             metrics-server                           0                   1d5dec60ab67a       metrics-server-84c5f94fbc-wfrnf
	566744d15c91f       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               6 minutes ago       Running             cloud-spanner-emulator                   0                   2ce78388a8512       cloud-spanner-emulator-769b77f747-7x6cj
	1cb6e9270416d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     6 minutes ago       Running             nvidia-device-plugin-ctr                 0                   6c5f84705a086       nvidia-device-plugin-daemonset-dcrh9
	e19218997c830       6e38f40d628db                                                                                                                                6 minutes ago       Running             storage-provisioner                      0                   debc24e02ca98       storage-provisioner
	e0a1b4e718aed       c69fa2e9cbf5f                                                                                                                                6 minutes ago       Running             coredns                                  0                   44104ce9decd6       coredns-7c65d6cfc9-vlmkz
	95dfe8f64bc6f       60c005f310ff3                                                                                                                                6 minutes ago       Running             kube-proxy                               0                   3eddba63436f7       kube-proxy-gm7kv
	236092569fa7f       2e96e5913fc06                                                                                                                                6 minutes ago       Running             etcd                                     0                   f4c192de28c8e       etcd-ubuntu-20-agent-2
	f656d4b3e221b       6bab7719df100                                                                                                                                6 minutes ago       Running             kube-apiserver                           0                   13c6d1481d7e3       kube-apiserver-ubuntu-20-agent-2
	abadc50dd44f1       175ffd71cce3d                                                                                                                                6 minutes ago       Running             kube-controller-manager                  0                   2dd1e926360a9       kube-controller-manager-ubuntu-20-agent-2
	0412032e5006c       9aa1fad941575                                                                                                                                6 minutes ago       Running             kube-scheduler                           0                   b7f61176a82d0       kube-scheduler-ubuntu-20-agent-2
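	
	The first row is the notable one: the gadget container sits in Exited on attempt 6, matching the CrashLoopBackOff entries in the kubelet log below. A short triage sketch, with the pod name and namespace taken from the table (container index 0 is an assumption):
	
	    # why the last run terminated
	    kubectl -n gadget get pod gadget-zt2b4 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'
	    # tail of the previous (crashed) container's output
	    kubectl -n gadget logs gadget-zt2b4 --previous --tail=50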
	
	
	==> coredns [e0a1b4e718ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59960 - 9097 "HINFO IN 5932384522844147917.1993008146596938559. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018267326s
	[INFO] 10.244.0.24:39221 - 38983 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000387765s
	[INFO] 10.244.0.24:57453 - 43799 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000481367s
	[INFO] 10.244.0.24:56558 - 1121 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126982s
	[INFO] 10.244.0.24:37367 - 64790 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137381s
	[INFO] 10.244.0.24:53874 - 61210 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129517s
	[INFO] 10.244.0.24:35488 - 47376 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167054s
	[INFO] 10.244.0.24:39756 - 34231 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003382584s
	[INFO] 10.244.0.24:42692 - 8269 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003496461s
	[INFO] 10.244.0.24:40495 - 49254 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00344128s
	[INFO] 10.244.0.24:54381 - 40672 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003513746s
	[INFO] 10.244.0.24:45458 - 51280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002837809s
	[INFO] 10.244.0.24:39080 - 48381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003158709s
	[INFO] 10.244.0.24:49164 - 30651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00123377s
	[INFO] 10.244.0.24:33687 - 1000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001779254s
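	
	The run of NXDOMAIN answers is ordinary search-path expansion rather than a resolution failure: with the default ndots:5, the client walks storage.googleapis.com through each search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE internal domains) before the bare name finally returns the two NOERROR answers. Assuming the client at 10.244.0.24 is the gcp-auth pod (its namespace heads the search list; the Deployment name is an assumption), the resolver config can be confirmed with:
	
	    kubectl -n gcp-auth exec deploy/gcp-auth -- cat /etc/resolv.conf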
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:30:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:25:27 +0000   Mon, 16 Sep 2024 10:23:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-7x6cj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  gadget                      gadget-zt2b4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  gcp-auth                    gcp-auth-89d5ffd79-wt6q9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 coredns-7c65d6cfc9-vlmkz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m41s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 csi-hostpathplugin-x6gtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m47s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-gm7kv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 metrics-server-84c5f94fbc-wfrnf              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m39s
	  kube-system                 nvidia-device-plugin-daemonset-dcrh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 snapshot-controller-56fcc65765-c729p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 snapshot-controller-56fcc65765-hhv7d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 tiller-deploy-b48cc5f79-jhzqk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  local-path-storage          local-path-provisioner-86d989889c-xpx7m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  volcano-system              volcano-admission-77d7d48b68-t975d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  volcano-system              volcano-controllers-56675bb4d5-kd2r2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  volcano-system              volcano-scheduler-576bc46687-l88qd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ggfmd               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m39s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m51s (x8 over 6m51s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m51s (x8 over 6m51s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m51s (x6 over 6m51s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m46s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m46s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m46s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m46s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m42s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 4f 68 84 7c 26 08 06
	[  +0.029810] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 4a d1 e3 09 35 08 06
	[  +2.541456] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 35 1c 77 2c 6a 08 06
	[Sep16 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 2e 0e e0 53 6a 08 06
	[  +1.979621] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
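	
	The "martian source" entries look like pod ARP broadcasts (destination ff:ff:ff:ff:ff:ff, ethertype 08 06) hitting eth0 as new pod IPs come up, which the kernel logs because martian logging is enabled; under the none driver this reads as noise rather than a failure. The relevant toggles, should they need checking on the node:
	
	    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter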
	
	
	==> etcd [236092569fa7] <==
	{"level":"info","ts":"2024-09-16T10:23:22.168340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:23:22.168349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:23:22.168359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:23:22.169311Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.169894Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:23:22.169903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.169924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.170145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170298Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.171038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:23:22.172233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:23:34.396500Z","caller":"traceutil/trace.go:171","msg":"trace[1443924902] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"122.443714ms","start":"2024-09-16T10:23:34.274027Z","end":"2024-09-16T10:23:34.396470Z","steps":["trace[1443924902] 'process raft request'  (duration: 42.860188ms)","trace[1443924902] 'compare'  (duration: 79.401186ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:23:34.396568Z","caller":"traceutil/trace.go:171","msg":"trace[1914523289] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"119.254337ms","start":"2024-09-16T10:23:34.277291Z","end":"2024-09-16T10:23:34.396545Z","steps":["trace[1914523289] 'process raft request'  (duration: 119.164267ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396664Z","caller":"traceutil/trace.go:171","msg":"trace[551861205] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"121.694141ms","start":"2024-09-16T10:23:34.274951Z","end":"2024-09-16T10:23:34.396645Z","steps":["trace[551861205] 'process raft request'  (duration: 121.454274ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396765Z","caller":"traceutil/trace.go:171","msg":"trace[612276300] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"117.724007ms","start":"2024-09-16T10:23:34.279030Z","end":"2024-09-16T10:23:34.396754Z","steps":["trace[612276300] 'process raft request'  (duration: 117.466969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396775Z","caller":"traceutil/trace.go:171","msg":"trace[485760124] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"107.084096ms","start":"2024-09-16T10:23:34.289681Z","end":"2024-09-16T10:23:34.396765Z","steps":["trace[485760124] 'process raft request'  (duration: 106.857041ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396851Z","caller":"traceutil/trace.go:171","msg":"trace[655456638] linearizableReadLoop","detail":"{readStateIndex:770; appliedIndex:767; }","duration":"117.963693ms","start":"2024-09-16T10:23:34.278878Z","end":"2024-09-16T10:23:34.396842Z","steps":["trace[655456638] 'read index received'  (duration: 5.820633ms)","trace[655456638] 'applied index is now lower than readState.Index'  (duration: 112.141241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:23:34.396925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.026308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:23:34.396979Z","caller":"traceutil/trace.go:171","msg":"trace[1000991150] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate; range_end:; response_count:0; response_revision:752; }","duration":"118.092731ms","start":"2024-09-16T10:23:34.278875Z","end":"2024-09-16T10:23:34.396968Z","steps":["trace[1000991150] 'agreement among raft nodes before linearized reading'  (duration: 118.006643ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:38.471576Z","caller":"traceutil/trace.go:171","msg":"trace[1536302833] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"154.211147ms","start":"2024-09-16T10:23:38.317339Z","end":"2024-09-16T10:23:38.471550Z","steps":["trace[1536302833] 'process raft request'  (duration: 154.053853ms)"],"step_count":1}
	
	
	==> gcp-auth [b806437d39cb] <==
	2024/09/16 10:24:56 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:30:11 up 12 min,  0 users,  load average: 0.95, 0.44, 0.24
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [f656d4b3e221] <==
	E0916 10:23:59.786707       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:23:59.788263       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:03.532842       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:04.623446       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:05.663512       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:06.687369       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:07.741783       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:08.796077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:09.892806       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.278243       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.278280       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.279887       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.290102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.290145       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.291730       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.911493       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:11.942936       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:13.040622       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:14.059340       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:20.272187       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:20.272230       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.287211       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.287254       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.296283       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.296314       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
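	
	Two distinct failure modes are visible above, matching each webhook's failurePolicy: gcp-auth-mutate.k8s.io fails open (the request is admitted and only a warning plus an "Unhandled Error" line is logged), while the volcano webhooks fail closed (the request is rejected) until their backing Services become reachable. The configured policies can be read back with plain kubectl:
	
	    kubectl get mutatingwebhookconfigurations -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'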
	
	
	==> kube-controller-manager [abadc50dd44f] <==
	I0916 10:24:42.307286       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:42.310264       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:42.312505       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:42.320196       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:44.683682       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:44.692525       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:45.715415       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:45.872836       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:46.737302       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:46.879053       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:46.884761       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:46.886340       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:46.889958       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:24:47.742623       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:47.749628       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:47.754341       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:24:56.917790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="5.791811ms"
	I0916 10:24:56.918045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="71.081µs"
	I0916 10:24:57.310368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:25:16.014611       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:25:16.035749       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:25:17.007655       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:25:17.024575       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:25:28.007825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:30:10.319802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.357µs"
	
	
	==> kube-proxy [95dfe8f64bc6] <==
	I0916 10:23:31.205838       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:31.406402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:23:31.406455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:31.489030       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:31.489102       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:31.508985       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:31.509483       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:31.509513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:31.539926       1 config.go:199] "Starting service config controller"
	I0916 10:23:31.540054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:31.559259       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:31.559278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:31.559824       1 config.go:328] "Starting node config controller"
	I0916 10:23:31.559836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:31.641834       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:31.660551       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:23:31.660598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0412032e5006] <==
	W0916 10:23:23.040568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:23.040650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.040660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:23.040674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:23.040716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.040756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.848417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.848457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.947205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.947244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.963782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:23.963827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.018222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:23:24.018276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.056374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:24.056418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.187965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:24.188004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.200436       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:24.200484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:23:24.239846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:24.239894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:27.139487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
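	
	The startup "forbidden" churn above is a known bootstrap race: the scheduler's informers begin listing before the apiserver has finished reconciling its default RBAC roles, and the closing "Caches are synced" line at 10:23:27 shows it recovered within seconds. Were it to persist, the grants could be probed directly:
	
	    kubectl auth can-i list persistentvolumes --as=system:kube-scheduler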
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:30:11 UTC. --
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.472879   16162 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f)" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.473302   16162 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f)" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.474194   16162 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f)" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.474486   16162 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f)" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.475348   16162 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f)" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.475520   16162 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = container not running (f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f)" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:03.870063   16162 scope.go:117] "RemoveContainer" containerID="f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9"
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:03.870526   16162 scope.go:117] "RemoveContainer" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f"
	Sep 16 10:30:03 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:03.870757   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:30:08 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:08.471934   16162 scope.go:117] "RemoveContainer" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f"
	Sep 16 10:30:08 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:08.472179   16162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-zt2b4_gadget(c0a97873-e0c3-41a1-af0b-2ece8d95b20a)\"" pod="gadget/gadget-zt2b4" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a"
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.707699   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm7bx\" (UniqueName: \"kubernetes.io/projected/6713b497-3d64-4b59-8553-56cccb541c50-kube-api-access-mm7bx\") pod \"6713b497-3d64-4b59-8553-56cccb541c50\" (UID: \"6713b497-3d64-4b59-8553-56cccb541c50\") "
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.709442   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6713b497-3d64-4b59-8553-56cccb541c50-kube-api-access-mm7bx" (OuterVolumeSpecName: "kube-api-access-mm7bx") pod "6713b497-3d64-4b59-8553-56cccb541c50" (UID: "6713b497-3d64-4b59-8553-56cccb541c50"). InnerVolumeSpecName "kube-api-access-mm7bx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.808661   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8t5s\" (UniqueName: \"kubernetes.io/projected/6b3bd156-0501-41a1-8285-865292e17bd7-kube-api-access-k8t5s\") pod \"6b3bd156-0501-41a1-8285-865292e17bd7\" (UID: \"6b3bd156-0501-41a1-8285-865292e17bd7\") "
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.808752   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mm7bx\" (UniqueName: \"kubernetes.io/projected/6713b497-3d64-4b59-8553-56cccb541c50-kube-api-access-mm7bx\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.810579   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b3bd156-0501-41a1-8285-865292e17bd7-kube-api-access-k8t5s" (OuterVolumeSpecName: "kube-api-access-k8t5s") pod "6b3bd156-0501-41a1-8285-865292e17bd7" (UID: "6b3bd156-0501-41a1-8285-865292e17bd7"). InnerVolumeSpecName "kube-api-access-k8t5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.909476   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k8t5s\" (UniqueName: \"kubernetes.io/projected/6b3bd156-0501-41a1-8285-865292e17bd7-kube-api-access-k8t5s\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.967358   16162 scope.go:117] "RemoveContainer" containerID="bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9"
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.986310   16162 scope.go:117] "RemoveContainer" containerID="bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9"
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:10.987172   16162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9" containerID="bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9"
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.987214   16162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9"} err="failed to get container status \"bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9\": rpc error: code = Unknown desc = Error response from daemon: No such container: bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9"
	Sep 16 10:30:10 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:10.987237   16162 scope.go:117] "RemoveContainer" containerID="6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295"
	Sep 16 10:30:11 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:11.003488   16162 scope.go:117] "RemoveContainer" containerID="6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295"
	Sep 16 10:30:11 ubuntu-20-agent-2 kubelet[16162]: E0916 10:30:11.004250   16162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295" containerID="6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295"
	Sep 16 10:30:11 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:11.004297   16162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295"} err="failed to get container status \"6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295"
	
	
	==> storage-provisioner [e19218997c83] <==
	I0916 10:23:33.807788       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:23:33.819755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:23:33.821506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:23:33.836239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:23:33.837177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	I0916 10:23:33.840556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"272307eb-dbc1-400e-a5a3-6595c2b694d1", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407 became leader
	I0916 10:23:33.937802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	

                                                
                                                
-- /stdout --
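The kubelet log above is dominated by the inspektor-gadget pod (gadget-zt2b4) in CrashLoopBackOff: the repeated "ExecSync cmd from runtime service failed ... container not running" entries are the liveness probe (/bin/gadgettracermanager -liveness) racing a container that keeps exiting, and the "No such container" errors are cleanup of already-removed predecessors. A first-pass triage sketch (hypothetical commands, not part of this run; the bundled kubectl is used because the host kubectl turns out to be broken, as the next step shows):

	out/minikube-linux-amd64 kubectl -- -n gadget describe pod gadget-zt2b4
	out/minikube-linux-amd64 kubectl -- -n gadget logs gadget-zt2b4 --previous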
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (350.583µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/Registry (11.55s)
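"fork/exec /usr/local/bin/kubectl: exec format error" is the kernel rejecting the binary before it ever runs, which is why each attempt fails in microseconds: the file at that path is not a valid executable for this host (typically a wrong-architecture download, a truncated file, or an error page saved in its place). A host-side check might look like this (a sketch; the expected outputs assume the amd64 host this report describes):

	file /usr/local/bin/kubectl   # expect: ELF 64-bit LSB executable, x86-64
	uname -m                      # expect: x86_64
	# if the two disagree, or the file is not ELF, re-fetch the matching binary:
	curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl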

                                                
                                    
TestAddons/parallel/MetricsServer (366.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.315072ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003540155s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (395.128µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (383.832µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (445.223µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (379.22µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (431.365µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (451.969µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (379.432µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (370.792µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (406.101µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (481.79µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (432.397µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (396.652µs)
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (377.904µs)
addons_test.go:431: failed checking metric server: fork/exec /usr/local/bin/kubectl: exec format error
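Note that metrics-server itself reported healthy within ~6s (addons_test.go:411 above), and each of the thirteen `kubectl top pods` attempts died in under half a millisecond with the same exec format error, i.e. before any request could reach the cluster. That points the failure at the host's kubectl binary rather than at the addon; one way to confirm (a sketch, bypassing the broken host binary via minikube's bundled kubectl) would be:

	out/minikube-linux-amd64 kubectl -- top pods -n kube-system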
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40127               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:13.140706   14731 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:13.140813   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140821   14731 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:13.140825   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140993   14731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:23:13.141565   14731 out.go:352] Setting JSON to false
	I0916 10:23:13.142443   14731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":344,"bootTime":1726481849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:13.142536   14731 start.go:139] virtualization: kvm guest
	I0916 10:23:13.144838   14731 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:23:13.146162   14731 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:23:13.146197   14731 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:13.146202   14731 notify.go:220] Checking for updates...
	I0916 10:23:13.148646   14731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:13.149886   14731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:13.151023   14731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:23:13.152258   14731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:13.153558   14731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:13.154983   14731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:13.165097   14731 out.go:177] * Using the none driver based on user configuration
	I0916 10:23:13.166355   14731 start.go:297] selected driver: none
	I0916 10:23:13.166366   14731 start.go:901] validating driver "none" against <nil>
	I0916 10:23:13.166376   14731 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:13.166401   14731 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:23:13.166708   14731 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:23:13.167363   14731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:13.167640   14731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:13.167685   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:13.167734   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:13.167744   14731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:13.167818   14731 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:13.169383   14731 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:23:13.171024   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.171056   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:13.171177   14731 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:23:13.171205   14731 start.go:364] duration metric: took 15.507µs to acquireMachinesLock for "minikube"
	I0916 10:23:13.171217   14731 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:13.171280   14731 start.go:125] createHost starting for "" (driver="none")
	I0916 10:23:13.173420   14731 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 10:23:13.174682   14731 exec_runner.go:51] Run: systemctl --version
	I0916 10:23:13.177006   14731 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 10:23:13.177034   14731 client.go:168] LocalClient.Create starting
	I0916 10:23:13.177131   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem
	I0916 10:23:13.177168   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177190   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177253   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem
	I0916 10:23:13.177275   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177285   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177573   14731 client.go:171] duration metric: took 533.456µs to LocalClient.Create
	I0916 10:23:13.177599   14731 start.go:167] duration metric: took 593.576µs to libmachine.API.Create "minikube"
	I0916 10:23:13.177608   14731 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:23:13.177642   14731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:13.177683   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:13.187236   14731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:13.187263   14731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:13.187275   14731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:13.189044   14731 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:23:13.190345   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:23:13.190401   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:23:13.190422   14731 start.go:296] duration metric: took 12.809081ms for postStartSetup
	I0916 10:23:13.191528   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.191738   14731 start.go:128] duration metric: took 20.449605ms to createHost
	I0916 10:23:13.191749   14731 start.go:83] releasing machines lock for "minikube", held for 20.535411ms
	I0916 10:23:13.192580   14731 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:13.192644   14731 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:23:13.194590   14731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:23:13.194649   14731 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:13.202734   14731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:23:13.202757   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.202792   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.202889   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.222327   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:13.230703   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:13.239020   14731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:13.239101   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:13.248805   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.257191   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:13.265887   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.274565   14731 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:13.283401   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:13.292383   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:13.300868   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:13.309031   14731 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:13.315780   14731 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:13.322874   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:13.538903   14731 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:23:13.606063   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.606117   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.606219   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.625810   14731 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:23:13.626697   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:23:13.634078   14731 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:23:13.634095   14731 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.634125   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.641943   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:23:13.642067   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube17162235 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.649525   14731 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:23:13.864371   14731 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:23:14.080198   14731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:23:14.080354   14731 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:23:14.080369   14731 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:23:14.080415   14731 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:23:14.088510   14731 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:23:14.088647   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube258152288 /etc/docker/daemon.json
	I0916 10:23:14.096396   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:14.312903   14731 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:23:14.614492   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:23:14.624711   14731 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:23:14.641378   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:14.651444   14731 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:23:14.875541   14731 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:23:15.086384   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.300370   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:23:15.313951   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:15.324456   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.540454   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:23:15.606406   14731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:23:15.606476   14731 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:23:15.607900   14731 start.go:563] Will wait 60s for crictl version
	I0916 10:23:15.607956   14731 exec_runner.go:51] Run: which crictl
	I0916 10:23:15.608880   14731 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:23:15.638324   14731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:23:15.638393   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.658714   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.681662   14731 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:23:15.681764   14731 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:15.684836   14731 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:23:15.686171   14731 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:15.686280   14731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:23:15.686290   14731 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0916 10:23:15.686371   14731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:23:15.686410   14731 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:23:15.733026   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.733051   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:15.733070   14731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:15.733090   14731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:15.733254   14731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:23:15.733305   14731 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.741208   14731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:23:15.741251   14731 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.748963   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:23:15.748989   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:23:15.748971   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:23:15.749021   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:15.749048   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:23:15.749023   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:23:15.759703   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:23:15.804184   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000397322 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:23:15.808532   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3573748997 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:23:15.825059   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036820018 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:23:15.890865   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:15.899083   14731 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:23:15.899106   14731 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.899146   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.906895   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:23:15.907034   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube686635375 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.914549   14731 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:23:15.914568   14731 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:23:15.914597   14731 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:23:15.921424   14731 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:15.921543   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube124460998 /lib/systemd/system/kubelet.service
	I0916 10:23:15.930481   14731 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:23:15.930611   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4089828324 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:23:15.938132   14731 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:15.939361   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:16.143380   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:16.158863   14731 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:23:16.158890   14731 certs.go:194] generating shared ca certs ...
	I0916 10:23:16.158911   14731 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.159062   14731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:23:16.159122   14731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:23:16.159135   14731 certs.go:256] generating profile certs ...
	I0916 10:23:16.159199   14731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:23:16.159225   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 10:23:16.405613   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt ...
	I0916 10:23:16.405642   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt: {Name:mk3286357234cda40557f508e5029c93016f9710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405782   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key ...
	I0916 10:23:16.405793   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key: {Name:mk20783244a73e90e04cdbc506e3032ad365b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405856   14731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:23:16.405870   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0916 10:23:16.569943   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0916 10:23:16.569971   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkaaeb0c21c9904b79d53b2917cee631d41c921c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570095   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0916 10:23:16.570104   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06e5d9a924eb3ef87fa2b5fa51a9f83a4abb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570154   14731 certs.go:381] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	I0916 10:23:16.570220   14731 certs.go:385] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key
	I0916 10:23:16.570270   14731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:23:16.570283   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 10:23:16.840205   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 10:23:16.840238   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt: {Name:mkffd4795ad0708e29c9e63a9f73c6e601584e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840383   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key ...
	I0916 10:23:16.840393   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key: {Name:mk1595e9621083c2801a11be8a4c6d2c56ebeb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840537   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:23:16.840569   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:16.840594   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:16.840624   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:16.841173   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:16.841296   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube746649098 /var/lib/minikube/certs/ca.crt
	I0916 10:23:16.850974   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:23:16.851102   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216583324 /var/lib/minikube/certs/ca.key
	I0916 10:23:16.859052   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:16.859162   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2429656602 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:23:16.867993   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:16.868122   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31356631 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:23:16.876316   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:23:16.876432   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2172809749 /var/lib/minikube/certs/apiserver.crt
	I0916 10:23:16.883937   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:16.884043   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3752504884 /var/lib/minikube/certs/apiserver.key
	I0916 10:23:16.891211   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:16.891348   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1611886685 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:23:16.898521   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:16.898630   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414896728 /var/lib/minikube/certs/proxy-client.key
	I0916 10:23:16.905794   14731 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:23:16.905813   14731 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.905843   14731 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.913039   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:16.913160   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817740740 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.920335   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:16.920430   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1902791778 /var/lib/minikube/kubeconfig
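
Every copy above follows the same two-step pattern: the content is staged in a temp file under /tmp, then moved into the root-owned destination with `sudo cp -a`. A minimal Go sketch of that pattern (the helper name sudoWriteFile is illustrative, not minikube's actual API):

	package main

	import (
		"os"
		"os/exec"
	)

	// sudoWriteFile stages data in a temp file, then uses `sudo cp -a` to place
	// it at dst, since the calling user cannot write root-owned paths directly.
	func sudoWriteFile(dst string, data []byte, perm os.FileMode) error {
		tmp, err := os.CreateTemp("", "minikube")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.Write(data); err != nil {
			return err
		}
		if err := tmp.Chmod(perm); err != nil { // cp -a preserves this mode
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		return exec.Command("sudo", "cp", "-a", tmp.Name(), dst).Run()
	}

	func main() {
		_ = sudoWriteFile("/var/lib/minikube/kubeconfig", []byte("..."), 0644)
	}
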
	I0916 10:23:16.929199   14731 exec_runner.go:51] Run: openssl version
	I0916 10:23:16.931944   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:16.940176   14731 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941576   14731 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941622   14731 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.944402   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
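
The b5213941.0 symlink above is how OpenSSL-based clients discover the CA: the link name is the certificate's subject hash plus a ".0" suffix. A sketch of deriving that name the same way the log does, via the openssl CLI (paths taken from the log; not minikube's actual code):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		// repoint the hash link at the CA; TLS verifiers scan /etc/ssl/certs by hash
		_ = os.Remove(link)
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
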
	I0916 10:23:16.952213   14731 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:16.953336   14731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:16.953373   14731 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:16.953468   14731 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:23:16.968833   14731 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:16.976751   14731 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:16.984440   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:17.005001   14731 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:17.013500   14731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:17.013523   14731 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:17.013559   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:17.021530   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:17.021577   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:17.029363   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:17.038339   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:17.038392   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:17.046433   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.055974   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:17.056021   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.064002   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:17.087369   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:17.087421   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
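
The grep/rm pairs above implement stale-config cleanup: a kubeconfig that does not reference the expected control-plane endpoint is removed so kubeadm init can regenerate it. The same logic as a Go sketch (endpoint and paths taken from the log; not minikube's actual code):

	package main

	import "os/exec"

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the endpoint is absent or the file is
			// missing; either way the config is treated as stale and removed
			if exec.Command("sudo", "grep", endpoint, conf).Run() != nil {
				_ = exec.Command("sudo", "rm", "-f", conf).Run()
			}
		}
	}
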
	I0916 10:23:17.094700   14731 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:23:17.125739   14731 kubeadm.go:310] W0916 10:23:17.125617   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.126248   14731 kubeadm.go:310] W0916 10:23:17.126207   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.127875   14731 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:17.127925   14731 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:17.218197   14731 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:17.218241   14731 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:17.218245   14731 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:17.218250   14731 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:17.228659   14731 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:17.231432   14731 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:17.231476   14731 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:17.231492   14731 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:17.409888   14731 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:17.475990   14731 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:17.539491   14731 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:17.796104   14731 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:18.073234   14731 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:18.073357   14731 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.366388   14731 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:18.366499   14731 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.555987   14731 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:18.639688   14731 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:18.710297   14731 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:18.710445   14731 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:19.161742   14731 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:19.258436   14731 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:19.315076   14731 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:19.572576   14731 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:19.765615   14731 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:19.766182   14731 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:19.768469   14731 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:19.770925   14731 out.go:235]   - Booting up control plane ...
	I0916 10:23:19.770956   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:19.770979   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:19.770988   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:19.791511   14731 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:19.797034   14731 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:19.797064   14731 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:20.020707   14731 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:20.020728   14731 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:20.522367   14731 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.615965ms
	I0916 10:23:20.522388   14731 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:24.524089   14731 kubeadm.go:310] [api-check] The API server is healthy after 4.001711526s
	I0916 10:23:24.534645   14731 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:24.545508   14731 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:24.561586   14731 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:24.561610   14731 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:24.569540   14731 kubeadm.go:310] [bootstrap-token] Using token: 60y8iu.vk0rxdhc25utw4uo
	I0916 10:23:24.571078   14731 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:24.571112   14731 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:24.575563   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:24.581879   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:24.584635   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:24.587409   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:24.589877   14731 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:24.929369   14731 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:25.351323   14731 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:25.929753   14731 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:25.930651   14731 kubeadm.go:310] 
	I0916 10:23:25.930669   14731 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:25.930673   14731 kubeadm.go:310] 
	I0916 10:23:25.930677   14731 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:25.930693   14731 kubeadm.go:310] 
	I0916 10:23:25.930705   14731 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:25.930710   14731 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:25.930713   14731 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:25.930717   14731 kubeadm.go:310] 
	I0916 10:23:25.930721   14731 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:25.930725   14731 kubeadm.go:310] 
	I0916 10:23:25.930730   14731 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:25.930737   14731 kubeadm.go:310] 
	I0916 10:23:25.930742   14731 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:25.930749   14731 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:25.930753   14731 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:25.930759   14731 kubeadm.go:310] 
	I0916 10:23:25.930763   14731 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:25.930765   14731 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:25.930768   14731 kubeadm.go:310] 
	I0916 10:23:25.930770   14731 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930773   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d \
	I0916 10:23:25.930778   14731 kubeadm.go:310] 	--control-plane 
	I0916 10:23:25.930781   14731 kubeadm.go:310] 
	I0916 10:23:25.930784   14731 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:25.930791   14731 kubeadm.go:310] 
	I0916 10:23:25.930794   14731 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930798   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d 
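
The --discovery-token-ca-cert-hash printed above is a SHA-256 over the CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the cluster CA. A sketch of reproducing the value from ca.crt (path taken from the log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
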
	I0916 10:23:25.933502   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:25.933525   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:25.935106   14731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:23:25.936272   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:23:25.946405   14731 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:23:25.946528   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951121141 /etc/cni/net.d/1-k8s.conflist
	I0916 10:23:25.957597   14731 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:25.957652   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:25.957691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 10:23:25.966602   14731 ops.go:34] apiserver oom_adj: -16
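
The oom_adj reading above (-16) confirms the apiserver is strongly deprioritized by the kernel OOM killer; the legacy oom_adj scale runs from -17 (never kill) to +15. The same probe in Go, mirroring the shell pipeline in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("/bin/bash", "-c",
			"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
	}
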
	I0916 10:23:26.024809   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:26.524979   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.025101   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.525561   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.024962   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.525631   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.025594   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.525691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.024918   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.524850   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.024821   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.098521   14731 kubeadm.go:1113] duration metric: took 5.140910239s to wait for elevateKubeSystemPrivileges
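
The repeated `kubectl get sa default` calls above are a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is running, and only then can the cluster-admin binding for kube-system take effect. A sketch of that loop (the 2-minute timeout is an assumption; the ~500 ms cadence matches the log):

	package main

	import (
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				return // default ServiceAccount exists; privileges are usable
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
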
	I0916 10:23:31.098550   14731 kubeadm.go:394] duration metric: took 14.145180358s to StartCluster
	I0916 10:23:31.098572   14731 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.098640   14731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:31.099273   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.099484   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:31.099563   14731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:31.099694   14731 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:23:31.099713   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.099725   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:23:31.099724   14731 addons.go:69] Setting yakd=true in profile "minikube"
	I0916 10:23:31.099749   14731 addons.go:234] Setting addon yakd=true in "minikube"
	I0916 10:23:31.099762   14731 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0916 10:23:31.099777   14731 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0916 10:23:31.099788   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.099807   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100187   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100203   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100227   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100376   14731 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0916 10:23:31.100405   14731 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0916 10:23:31.100436   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100438   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100445   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100453   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100459   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100485   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100491   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100769   14731 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0916 10:23:31.100790   14731 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0916 10:23:31.100826   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.101070   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101090   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101123   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101267   14731 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0916 10:23:31.101295   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0916 10:23:31.101510   14731 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0916 10:23:31.101527   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101535   14731 mustload.go:65] Loading cluster: minikube
	I0916 10:23:31.101541   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101572   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101737   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.101867   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101887   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101919   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102148   14731 addons.go:69] Setting volcano=true in profile "minikube"
	I0916 10:23:31.102169   14731 addons.go:234] Setting addon volcano=true in "minikube"
	I0916 10:23:31.102195   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.102220   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102233   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102253   14731 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0916 10:23:31.102265   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102298   14731 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:31.102312   14731 out.go:177] * Configuring local host environment ...
	I0916 10:23:31.102789   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102801   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102825   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.103836   14731 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0916 10:23:31.103861   14731 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0916 10:23:31.103905   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104241   14731 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:23:31.104257   14731 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:23:31.104275   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104742   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104753   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104763   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104773   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104784   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104812   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104956   14731 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0916 10:23:31.102331   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104975   14731 addons.go:69] Setting registry=true in profile "minikube"
	I0916 10:23:31.104984   14731 addons.go:234] Setting addon registry=true in "minikube"
	I0916 10:23:31.105000   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.105157   14731 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0916 10:23:31.105184   14731 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0916 10:23:31.105213   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104967   14731 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0916 10:23:31.105323   14731 host.go:66] Checking if "minikube" exists ...
	W0916 10:23:31.106873   14731 out.go:270] * 
	W0916 10:23:31.106888   14731 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:23:31.106896   14731 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:23:31.106903   14731 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:23:31.106909   14731 out.go:270] * 
	W0916 10:23:31.106955   14731 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:23:31.106962   14731 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:23:31.106971   14731 out.go:270] * 
	W0916 10:23:31.106995   14731 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:23:31.107002   14731 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:23:31.107009   14731 out.go:270] * 
	W0916 10:23:31.107018   14731 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:23:31.107045   14731 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:31.107984   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.107997   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.108026   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.108454   14731 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:31.109770   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.109792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.109828   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.110054   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:31.124712   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.127087   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.128504   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.130104   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.138756   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.138792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.138831   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.139721   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.139749   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.139779   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.142090   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142122   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142129   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142151   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142345   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.156934   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.156999   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.158343   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.158400   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.160580   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.163820   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.169364   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.171885   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.171953   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.173802   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.173849   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.174374   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.174420   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176241   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.176292   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176846   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.185299   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.186516   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.186575   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.194708   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.194738   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.194977   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.195032   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199863   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199893   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199933   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199946   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.200834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.200854   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.201607   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.201750   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.205007   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.205028   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.205039   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.205094   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.206485   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
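
Each apiserver status check above has two halves: resolve the process's freezer cgroup from /proc/<pid>/cgroup and require freezer.state to be THAWED (i.e. the cluster is not paused), then hit /healthz. A sketch of the cgroup-v1 half (the healthz call is an ordinary HTTPS GET; pid taken from the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// freezerState returns e.g. "THAWED" for pid's cgroup-v1 freezer hierarchy.
	func freezerState(pid int) (string, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(data), "\n") {
			parts := strings.SplitN(line, ":", 3) // hierarchy-ID:controllers:path
			if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
				state, err := os.ReadFile(filepath.Join(
					"/sys/fs/cgroup/freezer", parts[2], "freezer.state"))
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(state)), nil
			}
		}
		return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
	}

	func main() {
		state, err := freezerState(16036)
		fmt.Println(state, err)
	}
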
	I0916 10:23:31.210587   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.212372   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.212395   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.213745   14731 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:31.214160   14731 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 10:23:31.214415   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.216499   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.216520   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.216547   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.217076   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.217112   14731 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:31.217909   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube143406645 /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.218842   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.219226   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.219253   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.220512   14731 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:31.220867   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.221546   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.223173   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.221979   14731 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.223461   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:31.223768   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3150586776 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.225359   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.227613   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.227660   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.229063   14731 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:31.229334   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:31.230849   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.230883   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231177   14731 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:31.231657   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.231693   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234554   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231695   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234684   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.232274   14731 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:31.235888   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.236046   14731 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236071   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:31.236209   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3107188705 /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236904   14731 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:31.238542   14731 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.238573   14731 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:31.238771   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2095578904 /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.239882   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.240045   14731 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:31.244446   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.245954   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:31.246834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252064   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.246956   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.252578   14731 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.252624   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:31.246990   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252873   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247002   14731 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:31.253137   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube95020260 /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.247038   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253167   14731 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:31.253286   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2405129530 /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253617   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.253668   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.247061   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.253722   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247236   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:31.255868   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.255894   14731 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:31.255954   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.255976   14731 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:31.256002   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671809590 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.256098   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1236849984 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.257119   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.257771   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:31.259551   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259704   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259965   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.260128   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.260751   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.261489   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.261250   14731 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:31.261394   14731 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0916 10:23:31.262031   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.262778   14731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:31.262782   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.262800   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.262829   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.262833   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:31.264514   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264537   14731 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:23:31.264545   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264584   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264768   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:31.264924   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.264959   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:31.265088   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2364820269 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.266759   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.268033   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:31.268086   14731 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:31.269452   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:31.269500   14731 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:31.272346   14731 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272373   14731 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:31.272497   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754220183 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272890   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:31.275160   14731 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275188   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:31.275361   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2480903723 /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275532   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:31.277158   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277179   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:31.277664   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube478526718 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277859   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.277882   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:31.278022   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2636867839 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.290799   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.290835   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:31.291218   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3814086991 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.295428   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.295459   14731 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:31.295604   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3740101312 /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.306392   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.306425   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.311213   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.311248   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:31.311424   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube747122049 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.312994   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.313036   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.317835   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.318230   14731 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:31.323578   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube338558244 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.341814   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.341846   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:31.341971   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323528791 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.342204   14731 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.342226   14731 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:31.342566   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.342625   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.342837   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.342890   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube292318438 /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.343078   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.343101   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:31.343219   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4032243386 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.358435   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.358525   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358549   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:31.358693   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2881932452 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358881   14731 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:31.359009   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1282728706 /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.359505   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.366545   14731 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.366587   14731 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:31.366713   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1171915216 /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.378664   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.378695   14731 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:31.378815   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473351497 /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.380393   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.380417   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.382937   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.382966   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:31.383096   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2529455688 /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.384304   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.384326   14731 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:31.384438   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube881397 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.385231   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.385271   14731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385284   14731 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:23:31.385292   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385328   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.387805   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.387835   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:31.387939   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332358551 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.390197   14731 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.390227   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:31.390366   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46497832 /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.397672   14731 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:31.397951   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3186992100 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.403599   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.403630   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:31.403754   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445986553 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.409076   14731 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.409115   14731 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:31.409283   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1651200957 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.415599   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.415621   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:31.415721   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918202348 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.417404   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.423447   14731 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423472   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:31.423586   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube419582909 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423765   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.423804   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.436943   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.438121   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.443433   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.443523   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:31.443757   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube41635707 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.462088   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462127   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:31.462266   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805595243 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462657   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:31.462783   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160047024 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.464607   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.476223   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.479433   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.479463   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.482688   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.487583   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.490669   14731 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:31.492378   14731 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:31.493942   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.493975   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:31.494108   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281912972 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.499328   14731 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499357   14731 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:31.499374   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.499400   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:31.499487   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2719508217 /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499527   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411641332 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.518103   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.577544   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.577588   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:31.577779   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601059446 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.583317   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.651738   14731 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.651774   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:31.653267   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1921119500 /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.672720   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:31.786205   14731 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789214   14731 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:23:31.789238   14731 node_ready.go:38] duration metric: took 2.992874ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789249   14731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:31.802669   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:31.813190   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.813232   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:31.813392   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591024036 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.863589   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.965015   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.965162   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:31.966268   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3974451214 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.977982   14731 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0916 10:23:32.088850   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.088892   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:32.089762   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434131392 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.191154   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.191186   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:32.191329   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332266551 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.242672   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.242725   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:32.243830   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2503739100 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.299481   14731 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:32.324442   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.403566   14731 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0916 10:23:32.489342   14731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0916 10:23:32.514409   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.096961786s)
	I0916 10:23:32.514451   14731 addons.go:475] Verifying addon registry=true in "minikube"
	I0916 10:23:32.516449   14731 out.go:177] * Verifying registry addon...
	I0916 10:23:32.528963   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:23:32.532579   14731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:32.532675   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:32.570911   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.088181519s)
	I0916 10:23:32.907708   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.389561221s)
	I0916 10:23:32.966699   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.383338477s)
	I0916 10:23:33.052703   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.126489   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.262849545s)
	I0916 10:23:33.178161   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.713502331s)
	W0916 10:23:33.178208   14731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:33.178247   14731 retry.go:31] will retry after 159.834349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:33.338693   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:33.540389   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.809689   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:34.053876   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.539589   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.570200   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.231431807s)
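
	The failure-and-retry above follows a common ordering pattern: "kubectl apply" of a VolumeSnapshotClass races the registration of its CRD, fails with "ensure CRDs are installed first", and succeeds on a later attempt once the CRD is established. A minimal Go sketch of that pattern, for illustration only (retryApply, the attempt count, and the backoff are hypothetical, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// retryApply re-runs "kubectl apply" with exponential backoff, since
	// custom resources cannot be created until their CRD is established.
	func retryApply(kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		backoff := 160 * time.Millisecond // cf. "will retry after 159.834349ms" above
		var lastErr error
		for attempt := 1; attempt <= 5; attempt++ {
			cmd := exec.Command("kubectl", args...)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}

	func main() {
		if err := retryApply("/var/lib/minikube/kubeconfig",
			[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
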
	I0916 10:23:34.612191   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.252641903s)
	I0916 10:23:34.884849   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.560344146s)
	I0916 10:23:34.884890   14731 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:34.886878   14731 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:34.890123   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:34.895733   14731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:34.895758   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.033190   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.396363   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.534375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.895151   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.035637   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.308497   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:36.395655   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.533207   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.895449   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.033542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.395180   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.533433   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.895384   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.033538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.473613   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:23:38.473795   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398753053 /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.474692   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.484004   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:23:38.484134   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434783837 /var/lib/minikube/google_cloud_project
	I0916 10:23:38.494551   14731 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0916 10:23:38.494595   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:38.495054   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:38.495069   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:38.495094   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:38.511610   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:38.520861   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:38.520914   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:38.529401   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:38.529444   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:38.599469   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
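
	The healthz probes interleaved through the log (api_server.go:253/279) are plain HTTPS GETs against the control plane, treated as healthy when they return 200 with body "ok". A minimal sketch in Go under that assumption; the InsecureSkipVerify shortcut is only for illustration, a real client should trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs the probe logged above: GET /healthz, expect 200 "ok".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://10.138.0.48:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
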
	I0916 10:23:38.599542   14731 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.600327   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.656167   14731 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:23:38.735860   14731 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:38.798815   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.798859   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:23:38.798995   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2626597480 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.808091   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:38.862000   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.862041   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:23:38.862151   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046341520 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.872893   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.872922   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:23:38.873036   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2054254500 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.883326   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.894333   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.033277   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.262619   14731 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0916 10:23:39.264955   14731 out.go:177] * Verifying gcp-auth addon...
	I0916 10:23:39.266807   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:23:39.268717   14731 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:23:39.310878   14731 pod_ready.go:98] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310904   14731 pod_ready.go:82] duration metric: took 7.508146008s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	E0916 10:23:39.310915   14731 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310924   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:39.395512   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.532567   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.894633   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.033580   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.394602   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.533200   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.815447   14731 pod_ready.go:93] pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.815468   14731 pod_ready.go:82] duration metric: took 1.504536219s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.815477   14731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819153   14731 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.819171   14731 pod_ready.go:82] duration metric: took 3.688538ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819180   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822800   14731 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.822815   14731 pod_ready.go:82] duration metric: took 3.628798ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822823   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826537   14731 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.826556   14731 pod_ready.go:82] duration metric: took 3.726729ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826567   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.894014   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.906975   14731 pod_ready.go:93] pod "kube-proxy-gm7kv" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.906995   14731 pod_ready.go:82] duration metric: took 80.421296ms for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.907005   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.033182   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.307459   14731 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.307479   14731 pod_ready.go:82] duration metric: took 400.467827ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.307488   14731 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.394410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.532263   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.707267   14731 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.707293   14731 pod_ready.go:82] duration metric: took 399.79657ms for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.707305   14731 pod_ready.go:39] duration metric: took 9.918041839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
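
	The pod_ready.go lines above poll each pod until its Ready condition is True, and deliberately skip pods whose phase is Succeeded (a completed pod, like the first coredns replica left over after the deployment was rescaled to 1, can never become Ready). A sketch of that loop using client-go; the helper name and timings are illustrative, not minikube's implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod's Ready condition is True, erroring out
	// early if the pod completed (phase Succeeded) and so can never be Ready.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				if pod.Status.Phase == corev1.PodSucceeded {
					return false, fmt.Errorf("pod %s/%s completed; will never be Ready", ns, name)
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-vlmkz", 6*time.Minute))
	}
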
	I0916 10:23:41.707331   14731 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:23:41.707469   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:41.727079   14731 api_server.go:72] duration metric: took 10.620002836s to wait for apiserver process to appear ...
	I0916 10:23:41.727105   14731 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:23:41.727130   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:41.731666   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:41.732551   14731 api_server.go:141] control plane version: v1.31.1
	I0916 10:23:41.732571   14731 api_server.go:131] duration metric: took 5.460229ms to wait for apiserver health ...
	I0916 10:23:41.732579   14731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:23:41.894027   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.998997   14731 system_pods.go:59] 17 kube-system pods found
	I0916 10:23:41.999033   14731 system_pods.go:61] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:41.999047   14731 system_pods.go:61] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:41.999057   14731 system_pods.go:61] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:41.999075   14731 system_pods.go:61] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:41.999087   14731 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:41.999092   14731 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:41.999096   14731 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:41.999099   14731 system_pods.go:61] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:41.999104   14731 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:41.999111   14731 system_pods.go:61] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:41.999114   14731 system_pods.go:61] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:41.999127   14731 system_pods.go:61] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:41.999138   14731 system_pods.go:61] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:41.999147   14731 system_pods.go:61] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999159   14731 system_pods.go:61] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999164   14731 system_pods.go:61] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:41.999175   14731 system_pods.go:61] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:41.999182   14731 system_pods.go:74] duration metric: took 266.598276ms to wait for pod list to return data ...
	I0916 10:23:41.999196   14731 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:23:42.032591   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.106881   14731 default_sa.go:45] found service account: "default"
	I0916 10:23:42.106907   14731 default_sa.go:55] duration metric: took 107.703967ms for default service account to be created ...
	I0916 10:23:42.106918   14731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:23:42.375306   14731 system_pods.go:86] 17 kube-system pods found
	I0916 10:23:42.375339   14731 system_pods.go:89] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:42.375347   14731 system_pods.go:89] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:42.375355   14731 system_pods.go:89] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:42.375362   14731 system_pods.go:89] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:42.375367   14731 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:42.375372   14731 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:42.375377   14731 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:42.375382   14731 system_pods.go:89] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:42.375385   14731 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:42.375395   14731 system_pods.go:89] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:42.375400   14731 system_pods.go:89] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:42.375405   14731 system_pods.go:89] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:42.375411   14731 system_pods.go:89] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:42.375417   14731 system_pods.go:89] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375425   14731 system_pods.go:89] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375429   14731 system_pods.go:89] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:42.375435   14731 system_pods.go:89] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:42.375442   14731 system_pods.go:126] duration metric: took 268.518179ms to wait for k8s-apps to be running ...
	I0916 10:23:42.375451   14731 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:23:42.375494   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:42.387115   14731 system_svc.go:56] duration metric: took 11.655134ms WaitForService to wait for kubelet
	I0916 10:23:42.387140   14731 kubeadm.go:582] duration metric: took 11.2800718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:42.387171   14731 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:23:42.394773   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.507386   14731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:23:42.507413   14731 node_conditions.go:123] node cpu capacity is 8
	I0916 10:23:42.507426   14731 node_conditions.go:105] duration metric: took 120.250263ms to run NodePressure ...
	I0916 10:23:42.507440   14731 start.go:241] waiting for startup goroutines ...
	I0916 10:23:42.531600   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	... (kapi.go:96 poll lines condensed: "waiting for pod" checks for kubernetes.io/minikube-addons=csi-hostpath-driver and kubernetes.io/minikube-addons=registry, both still Pending, repeated roughly every 500ms from 10:23:42 through 10:23:52) ...
	I0916 10:23:53.032897   14731 kapi.go:107] duration metric: took 20.503936091s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:53.395464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... (kapi.go:96 poll lines condensed: "waiting for pod" checks for kubernetes.io/minikube-addons=csi-hostpath-driver, still Pending, repeated roughly every 500ms from 10:23:53 through 10:24:10) ...
	I0916 10:24:11.394201   14731 kapi.go:107] duration metric: took 36.504077115s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:20.771019   14731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:20.771044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... (kapi.go:96 poll lines condensed: "waiting for pod" checks for kubernetes.io/minikube-addons=gcp-auth, still Pending, repeated roughly every 500ms from 10:24:21 through 10:24:56) ...
	I0916 10:24:57.270401   14731 kapi.go:107] duration metric: took 1m18.003594843s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:24:57.272413   14731 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0916 10:24:57.273706   14731 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:24:57.274969   14731 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:24:57.276179   14731 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, metrics-server, helm-tiller, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, volcano, registry, csi-hostpath-driver, gcp-auth
	I0916 10:24:57.277503   14731 addons.go:510] duration metric: took 1m26.177945157s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd metrics-server helm-tiller storage-provisioner storage-provisioner-rancher inspektor-gadget volumesnapshots volcano registry csi-hostpath-driver gcp-auth]
	I0916 10:24:57.277539   14731 start.go:246] waiting for cluster config update ...
	I0916 10:24:57.277557   14731 start.go:255] writing updated cluster config ...
	I0916 10:24:57.277828   14731 exec_runner.go:51] Run: rm -f paused
	I0916 10:24:57.280918   14731 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:24:57.282289   14731 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
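	
	Note: the out.go messages above describe the gcp-auth addon's opt-out mechanism. As a minimal, illustrative sketch (not part of the test run): a pod that skips credential mounting carries the `gcp-auth-skip-secret` label named in the log, conventionally with the value "true"; the pod name and image below are hypothetical.
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds               # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"   # label key taken from the log message; "true" is the assumed conventional value
	  spec:
	    containers:
	    - name: app
	      image: busybox                 # hypothetical image, just to make the sketch runnable
	      command: ["sleep", "3600"]
	
	Pods created before the addon finished can pick up credentials either by being recreated or by rerunning the addon with the flag the message names, e.g. `minikube addons enable gcp-auth --refresh`.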
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:36:27 UTC. --
	Sep 16 10:24:56 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:24:56Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.921394894Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.921394785Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:24:57 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:57.923527826Z" level=error msg="Error running exec 40de4d4402a849a66630e4b3e224b5cac52a3344d4191ab61093c755f1eae2f9 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:24:58 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:58.030336094Z" level=info msg="ignoring event" container=063696e8a73aabc89418d2c58e71706ba02ccbbecf8ff00cbae4ce69ab4d8dc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:25:38 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:25:38Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:25:40 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:25:40.013070122Z" level=info msg="ignoring event" container=285e9d3bf61063164576db1e8b56067f2715f3125c65a408fb460b33df4e0df3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:27:12 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:27:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836428Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836085Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.785558764Z" level=error msg="Error running exec 13e088d02d0a5f22acc5e5b1a4471ba70b2f244b367260c945e607695da23676 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799299215Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799311411Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.801146259Z" level=error msg="Error running exec 8124ff9355b2b195f4666e956e5c04835c7ab5bbca41ab5f07f5d54c9a438e8a in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.997546489Z" level=info msg="ignoring event" container=f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:01 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:30:01Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860094779Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860112359Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.861900754Z" level=error msg="Error running exec 7325b4844d467316c92c35912814ef76ffc52ab0706fc16a141d2d4c86eec807 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:03 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:03.053613980Z" level=info msg="ignoring event" container=f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.355786042Z" level=info msg="ignoring event" container=bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.422842358Z" level=info msg="ignoring event" container=6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.489977617Z" level=info msg="ignoring event" container=bede25b8f44c47a7583d31e5f552ceb2818b45bf9b6e66175cefd80b6e4a1ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.585848075Z" level=info msg="ignoring event" container=8a0796a6fd139e34146729f05330e8554afd338b598fd53c135d700704cea580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:16 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:16.809464495Z" level=info msg="ignoring event" container=3902ec2c22c138271b7c612de2b2ec28e9b3e2406519c1a03ab3d1e1760a1146 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b806437d39cb5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 11 minutes ago      Running             gcp-auth                                 0                   872b837fda1bc       gcp-auth-89d5ffd79-wt6q9
	6b6303f81cb52       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          12 minutes ago      Running             csi-snapshotter                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d549f78521f57       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          12 minutes ago      Running             csi-provisioner                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	9125db73d99e1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            12 minutes ago      Running             liveness-probe                           0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	87c37483d2112       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           12 minutes ago      Running             hostpath                                 0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	cd42401f74b1d       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         12 minutes ago      Running             admission                                0                   d5cc1eab65661       volcano-admission-77d7d48b68-t975d
	0c0ddb709904f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                12 minutes ago      Running             node-driver-registrar                    0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	b0782903176d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago      Running             csi-resizer                              0                   fb9dfe220b3dc       csi-hostpath-resizer-0
	4edaa9f0351e1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago      Running             csi-attacher                             0                   fa27205224e9f       csi-hostpath-attacher-0
	f0ce5f8efdc2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago      Running             csi-external-health-monitor-controller   0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d35f343c48bcb       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               12 minutes ago      Running             volcano-scheduler                        0                   ca6d7d9980376       volcano-scheduler-576bc46687-l88qd
	3fa7892ed6588       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      12 minutes ago      Running             volcano-controllers                      0                   1d8c71b5408cc       volcano-controllers-56675bb4d5-kd2r2
	23bdeff0c7c03       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         12 minutes ago      Exited              main                                     0                   2684a290edfd1       volcano-admission-init-4rd4m
	a7c6ba8b5b8e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   2a9eff5290337       snapshot-controller-56fcc65765-c729p
	59e2e493c17f7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   a62d801d6adc1       snapshot-controller-56fcc65765-hhv7d
	c5ee33602669d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   6fcb08908435e       local-path-provisioner-86d989889c-xpx7m
	fe6d1bd912755       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  12 minutes ago      Running             tiller                                   0                   4cc0471023071       tiller-deploy-b48cc5f79-jhzqk
	c2bb3772d49b5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        12 minutes ago      Running             yakd                                     0                   54361ea6661c2       yakd-dashboard-67d98fc6b-ggfmd
	1c9f6a3099faf       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   1d5dec60ab67a       metrics-server-84c5f94fbc-wfrnf
	566744d15c91f       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago      Running             cloud-spanner-emulator                   0                   2ce78388a8512       cloud-spanner-emulator-769b77f747-7x6cj
	1cb6e9270416d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     12 minutes ago      Running             nvidia-device-plugin-ctr                 0                   6c5f84705a086       nvidia-device-plugin-daemonset-dcrh9
	e19218997c830       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   debc24e02ca98       storage-provisioner
	e0a1b4e718aed       c69fa2e9cbf5f                                                                                                                                12 minutes ago      Running             coredns                                  0                   44104ce9decd6       coredns-7c65d6cfc9-vlmkz
	95dfe8f64bc6f       60c005f310ff3                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   3eddba63436f7       kube-proxy-gm7kv
	236092569fa7f       2e96e5913fc06                                                                                                                                13 minutes ago      Running             etcd                                     0                   f4c192de28c8e       etcd-ubuntu-20-agent-2
	f656d4b3e221b       6bab7719df100                                                                                                                                13 minutes ago      Running             kube-apiserver                           0                   13c6d1481d7e3       kube-apiserver-ubuntu-20-agent-2
	abadc50dd44f1       175ffd71cce3d                                                                                                                                13 minutes ago      Running             kube-controller-manager                  0                   2dd1e926360a9       kube-controller-manager-ubuntu-20-agent-2
	0412032e5006c       9aa1fad941575                                                                                                                                13 minutes ago      Running             kube-scheduler                           0                   b7f61176a82d0       kube-scheduler-ubuntu-20-agent-2
	
	
	==> coredns [e0a1b4e718ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59960 - 9097 "HINFO IN 5932384522844147917.1993008146596938559. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018267326s
	[INFO] 10.244.0.24:39221 - 38983 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000387765s
	[INFO] 10.244.0.24:57453 - 43799 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000481367s
	[INFO] 10.244.0.24:56558 - 1121 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126982s
	[INFO] 10.244.0.24:37367 - 64790 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137381s
	[INFO] 10.244.0.24:53874 - 61210 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129517s
	[INFO] 10.244.0.24:35488 - 47376 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167054s
	[INFO] 10.244.0.24:39756 - 34231 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003382584s
	[INFO] 10.244.0.24:42692 - 8269 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003496461s
	[INFO] 10.244.0.24:40495 - 49254 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00344128s
	[INFO] 10.244.0.24:54381 - 40672 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003513746s
	[INFO] 10.244.0.24:45458 - 51280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002837809s
	[INFO] 10.244.0.24:39080 - 48381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003158709s
	[INFO] 10.244.0.24:49164 - 30651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00123377s
	[INFO] 10.244.0.24:33687 - 1000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001779254s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:36:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-7x6cj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-wt6q9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-vlmkz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-x6gtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gm7kv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-wfrnf              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-dcrh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-c729p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-hhv7d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-jhzqk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xpx7m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-77d7d48b68-t975d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-controllers-56675bb4d5-kd2r2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-scheduler-576bc46687-l88qd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ggfmd               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x6 over 13m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 4f 68 84 7c 26 08 06
	[  +0.029810] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 4a d1 e3 09 35 08 06
	[  +2.541456] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 35 1c 77 2c 6a 08 06
	[Sep16 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 2e 0e e0 53 6a 08 06
	[  +1.979621] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	
	
	==> etcd [236092569fa7] <==
	{"level":"info","ts":"2024-09-16T10:23:22.169311Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.169894Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:23:22.169903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.169924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.170145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170298Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.171038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:23:22.172233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:23:34.396500Z","caller":"traceutil/trace.go:171","msg":"trace[1443924902] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"122.443714ms","start":"2024-09-16T10:23:34.274027Z","end":"2024-09-16T10:23:34.396470Z","steps":["trace[1443924902] 'process raft request'  (duration: 42.860188ms)","trace[1443924902] 'compare'  (duration: 79.401186ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:23:34.396568Z","caller":"traceutil/trace.go:171","msg":"trace[1914523289] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"119.254337ms","start":"2024-09-16T10:23:34.277291Z","end":"2024-09-16T10:23:34.396545Z","steps":["trace[1914523289] 'process raft request'  (duration: 119.164267ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396664Z","caller":"traceutil/trace.go:171","msg":"trace[551861205] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"121.694141ms","start":"2024-09-16T10:23:34.274951Z","end":"2024-09-16T10:23:34.396645Z","steps":["trace[551861205] 'process raft request'  (duration: 121.454274ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396765Z","caller":"traceutil/trace.go:171","msg":"trace[612276300] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"117.724007ms","start":"2024-09-16T10:23:34.279030Z","end":"2024-09-16T10:23:34.396754Z","steps":["trace[612276300] 'process raft request'  (duration: 117.466969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396775Z","caller":"traceutil/trace.go:171","msg":"trace[485760124] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"107.084096ms","start":"2024-09-16T10:23:34.289681Z","end":"2024-09-16T10:23:34.396765Z","steps":["trace[485760124] 'process raft request'  (duration: 106.857041ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396851Z","caller":"traceutil/trace.go:171","msg":"trace[655456638] linearizableReadLoop","detail":"{readStateIndex:770; appliedIndex:767; }","duration":"117.963693ms","start":"2024-09-16T10:23:34.278878Z","end":"2024-09-16T10:23:34.396842Z","steps":["trace[655456638] 'read index received'  (duration: 5.820633ms)","trace[655456638] 'applied index is now lower than readState.Index'  (duration: 112.141241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:23:34.396925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.026308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:23:34.396979Z","caller":"traceutil/trace.go:171","msg":"trace[1000991150] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate; range_end:; response_count:0; response_revision:752; }","duration":"118.092731ms","start":"2024-09-16T10:23:34.278875Z","end":"2024-09-16T10:23:34.396968Z","steps":["trace[1000991150] 'agreement among raft nodes before linearized reading'  (duration: 118.006643ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:38.471576Z","caller":"traceutil/trace.go:171","msg":"trace[1536302833] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"154.211147ms","start":"2024-09-16T10:23:38.317339Z","end":"2024-09-16T10:23:38.471550Z","steps":["trace[1536302833] 'process raft request'  (duration: 154.053853ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:33:22.188338Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2024-09-16T10:33:22.212714Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1554,"took":"23.934179ms","hash":4226216058,"current-db-size-bytes":7352320,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":3911680,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-16T10:33:22.212758Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4226216058,"revision":1554,"compact-revision":-1}
	
	
	==> gcp-auth [b806437d39cb] <==
	2024/09/16 10:24:56 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:36:28 up 18 min,  0 users,  load average: 0.07, 0.17, 0.17
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [f656d4b3e221] <==
	W0916 10:24:03.532842       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:04.623446       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:05.663512       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:06.687369       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:07.741783       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:08.796077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:09.892806       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.278243       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.278280       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.279887       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.290102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.290145       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.291730       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.911493       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:11.942936       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:13.040622       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:14.059340       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:20.272187       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:20.272230       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.287211       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.287254       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.296283       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.296314       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	I0916 10:30:16.763857       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:30:17.782395       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
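
The webhook failures above show the two admission failure policies side by side: the volcano webhooks are "failing closed", so an unreachable backend rejects the request outright, while gcp-auth is "failing open", so the request is admitted and only an error is logged. Both backends simply were not serving yet at 10:24 (connection refused) and recovered once their pods came up, as the gcp-auth container log above confirms. A minimal sketch of where this behavior is configured, assuming a Go module with k8s.io/api available (the webhook name is copied from the log; the program is illustrative only):

	package main

	import (
		"fmt"

		admissionv1 "k8s.io/api/admissionregistration/v1"
	)

	func main() {
		failClosed := admissionv1.Fail  // unreachable webhook => request rejected ("failing closed")
		failOpen := admissionv1.Ignore  // unreachable webhook => request admitted, error logged ("failing open")

		webhook := admissionv1.MutatingWebhook{
			Name:          "mutatequeue.volcano.sh",
			FailurePolicy: &failClosed,
		}
		fmt.Println(webhook.Name, *webhook.FailurePolicy, failOpen)
	}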
	
	
	==> kube-controller-manager [abadc50dd44f] <==
	I0916 10:30:30.134293       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:30:30.134330       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:30:30.550675       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:30:30.550720       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:30:32.908167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	W0916 10:30:37.961768       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:37.961805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:30:52.834739       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:52.834781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:31:29.517193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:31:29.517235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:32:14.237055       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:32:14.237103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:04.260642       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:04.260689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:49.953230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:49.953271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:34:30.366531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:34:30.366573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:35:18.546778       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:35:18.546822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:35:38.907117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	W0916 10:36:03.761315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:03.761365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:36:27.239533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.183µs"
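
The repeating *v1.PartialObjectMetadata list failures above begin at 10:30:37, shortly after the apiserver terminated the watchers for traces.gadget.kinvolk.io (10:30:17, when the inspektor-gadget addon was disabled). This is most likely the controller-manager's metadata informer still retrying a watch for a custom resource whose CRD has been removed; the errors are noisy but do not, by themselves, indicate a controller failure.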
	
	
	==> kube-proxy [95dfe8f64bc6] <==
	I0916 10:23:31.205838       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:31.406402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:23:31.406455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:31.489030       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:31.489102       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:31.508985       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:31.509483       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:31.509513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:31.539926       1 config.go:199] "Starting service config controller"
	I0916 10:23:31.540054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:31.559259       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:31.559278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:31.559824       1 config.go:328] "Starting node config controller"
	I0916 10:23:31.559836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:31.641834       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:31.660551       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:23:31.660598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0412032e5006] <==
	W0916 10:23:23.040568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:23.040650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.040660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:23.040674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:23.040716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.040756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.848417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.848457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.947205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.947244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.963782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:23.963827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.018222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:23:24.018276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.056374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:24.056418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.187965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:24.188004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.200436       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:24.200484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:23:24.239846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:24.239894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:27.139487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
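
The "forbidden" errors above are the usual kube-scheduler startup race: its informers begin listing resources before the scheduler's RBAC grants have propagated, so the first attempts are rejected. The final line (caches synced at 10:23:27) shows the scheduler recovered on its own a few seconds later.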
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:36:28 UTC. --
	Sep 16 10:30:11 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:11.394622   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6713b497-3d64-4b59-8553-56cccb541c50" path="/var/lib/kubelet/pods/6713b497-3d64-4b59-8553-56cccb541c50/volumes"
	Sep 16 10:30:11 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:11.395271   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b3bd156-0501-41a1-8285-865292e17bd7" path="/var/lib/kubelet/pods/6b3bd156-0501-41a1-8285-865292e17bd7/volumes"
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051499   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-run\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051549   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-host\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051576   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-debugfs\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051600   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-modules\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051608   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-run" (OuterVolumeSpecName: "run") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051633   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdbd4\" (UniqueName: \"kubernetes.io/projected/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-kube-api-access-bdbd4\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051661   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-cgroup\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051658   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-host" (OuterVolumeSpecName: "host") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051675   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-debugfs" (OuterVolumeSpecName: "debugfs") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051684   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-bpffs\") pod \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\" (UID: \"c0a97873-e0c3-41a1-af0b-2ece8d95b20a\") "
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051682   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-cgroup" (OuterVolumeSpecName: "cgroup") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051668   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-modules" (OuterVolumeSpecName: "modules") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051704   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-bpffs" (OuterVolumeSpecName: "bpffs") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051805   16162 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-run\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051822   16162 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-host\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.051830   16162 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-debugfs\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.054072   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-kube-api-access-bdbd4" (OuterVolumeSpecName: "kube-api-access-bdbd4") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "kube-api-access-bdbd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.059883   16162 scope.go:117] "RemoveContainer" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f"
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152877   16162 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-modules\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152906   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bdbd4\" (UniqueName: \"kubernetes.io/projected/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-kube-api-access-bdbd4\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152918   16162 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-cgroup\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152930   16162 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-bpffs\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.391044   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a" path="/var/lib/kubelet/pods/c0a97873-e0c3-41a1-af0b-2ece8d95b20a/volumes"
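
The volume teardown above (run, host, debugfs, modules, bpffs, cgroup, plus the projected kube-api-access token) all belongs to a single pod UID, c0a97873-e0c3-41a1-af0b-2ece8d95b20a. Those host-path mounts are characteristic of inspektor-gadget, and the timing matches the "disable inspektor-gadget" entry in the Audit table below, so this reads as a clean removal rather than an error.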
	
	
	==> storage-provisioner [e19218997c83] <==
	I0916 10:23:33.807788       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:23:33.819755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:23:33.821506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:23:33.836239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:23:33.837177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	I0916 10:23:33.840556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"272307eb-dbc1-400e-a5a3-6595c2b694d1", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407 became leader
	I0916 10:23:33.937802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (300.437µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/MetricsServer (366.41s)
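
Every kubectl invocation in the post-mortems above and below fails in well under a millisecond with "fork/exec /usr/local/bin/kubectl: exec format error". That error comes from the kernel refusing to execute the binary at all, which typically means the kubectl on the test host is corrupted or built for the wrong OS or architecture; the cluster never sees the requests, so the failure is environmental rather than a regression in the addon under test. A minimal sketch for checking the binary, assuming a Linux/amd64 host (the helper is hypothetical, not part of the suite):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	func main() {
		path := "/usr/local/bin/kubectl" // path taken from the failing test output
		f, err := os.Open(path)
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()

		magic := make([]byte, 4)
		if _, err := io.ReadFull(f, magic); err != nil {
			fmt.Println("read:", err)
			return
		}
		// A native Linux binary must start with the ELF magic bytes 0x7f 'E' 'L' 'F';
		// anything else (an HTML error page saved as a download, or a binary built
		// for another OS) produces "exec format error" when executed.
		if string(magic) == "\x7fELF" {
			fmt.Println(path, "looks like an ELF binary")
		} else {
			fmt.Printf("%s is not an ELF binary (magic: % x)\n", path, magic)
		}
	}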

                                                
                                    
TestAddons/parallel/HelmTiller (92.67s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 6.73172ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00416115s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (405.84µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (383.324µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (402.079µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (414.156µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (421.124µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (404.062µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (459.032µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (395.488µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (364.414µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (407.018µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (428.904µs)
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (470.151µs)
addons_test.go:489: failed checking helm tiller: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40127               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:38 UTC |
	|         | helm-tiller --alsologtostderr        |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:13.140706   14731 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:13.140813   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140821   14731 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:13.140825   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140993   14731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:23:13.141565   14731 out.go:352] Setting JSON to false
	I0916 10:23:13.142443   14731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":344,"bootTime":1726481849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:13.142536   14731 start.go:139] virtualization: kvm guest
	I0916 10:23:13.144838   14731 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:23:13.146162   14731 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:23:13.146197   14731 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:13.146202   14731 notify.go:220] Checking for updates...
	I0916 10:23:13.148646   14731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:13.149886   14731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:13.151023   14731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:23:13.152258   14731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:13.153558   14731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:13.154983   14731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:13.165097   14731 out.go:177] * Using the none driver based on user configuration
	I0916 10:23:13.166355   14731 start.go:297] selected driver: none
	I0916 10:23:13.166366   14731 start.go:901] validating driver "none" against <nil>
	I0916 10:23:13.166376   14731 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:13.166401   14731 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:23:13.166708   14731 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:23:13.167363   14731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:13.167640   14731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:13.167685   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:13.167734   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:13.167744   14731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:13.167818   14731 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:13.169383   14731 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:23:13.171024   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.171056   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:13.171177   14731 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:23:13.171205   14731 start.go:364] duration metric: took 15.507µs to acquireMachinesLock for "minikube"
	I0916 10:23:13.171217   14731 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:13.171280   14731 start.go:125] createHost starting for "" (driver="none")
	I0916 10:23:13.173420   14731 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 10:23:13.174682   14731 exec_runner.go:51] Run: systemctl --version
	I0916 10:23:13.177006   14731 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 10:23:13.177034   14731 client.go:168] LocalClient.Create starting
	I0916 10:23:13.177131   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem
	I0916 10:23:13.177168   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177190   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177253   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem
	I0916 10:23:13.177275   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177285   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177573   14731 client.go:171] duration metric: took 533.456µs to LocalClient.Create
	I0916 10:23:13.177599   14731 start.go:167] duration metric: took 593.576µs to libmachine.API.Create "minikube"
	I0916 10:23:13.177608   14731 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:23:13.177642   14731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:13.177683   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:13.187236   14731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:13.187263   14731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:13.187275   14731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:13.189044   14731 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:23:13.190345   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:23:13.190401   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:23:13.190422   14731 start.go:296] duration metric: took 12.809081ms for postStartSetup
	I0916 10:23:13.191528   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.191738   14731 start.go:128] duration metric: took 20.449605ms to createHost
	I0916 10:23:13.191749   14731 start.go:83] releasing machines lock for "minikube", held for 20.535411ms
	I0916 10:23:13.192580   14731 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:13.192644   14731 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:23:13.194590   14731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:23:13.194649   14731 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:13.202734   14731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:23:13.202757   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.202792   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.202889   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.222327   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:13.230703   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:13.239020   14731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:13.239101   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:13.248805   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.257191   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:13.265887   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.274565   14731 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:13.283401   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:13.292383   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:13.300868   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
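
The run of sed edits above rewrites /etc/containerd/config.toml to pin the pause image, force the runc v2 shim, point containerd at /etc/cni/net.d, and select the cgroupfs cgroup driver. A consolidated sketch of the same edits (commands copied from the log; the backup step is an addition, and minikube itself issues each command separately, with the daemon-reload/restart following a few lines below):

    #!/usr/bin/env bash
    # Sketch: reproduce minikube's containerd edits from the log above.
    set -euo pipefail
    cfg=/etc/containerd/config.toml
    sudo cp "$cfg" "$cfg.bak"   # added for safety; minikube edits in place
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
    sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"        # cgroupfs driver
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"   # v2 shim
    sudo sed -i '/systemd_cgroup/d' "$cfg"
    sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
    sudo sed -i '/^ *enable_unprivileged_ports = .*/d' "$cfg"
    sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$cfg"
    sudo systemctl daemon-reload && sudo systemctl restart containerd
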
	I0916 10:23:13.309031   14731 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:13.315780   14731 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:13.322874   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:13.538903   14731 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:23:13.606063   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.606117   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.606219   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.625810   14731 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:23:13.626697   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:23:13.634078   14731 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:23:13.634095   14731 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.634125   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.641943   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:23:13.642067   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube17162235 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.649525   14731 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:23:13.864371   14731 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:23:14.080198   14731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:23:14.080354   14731 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:23:14.080369   14731 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:23:14.080415   14731 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:23:14.088510   14731 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:23:14.088647   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube258152288 /etc/docker/daemon.json
	I0916 10:23:14.096396   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:14.312903   14731 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:23:14.614492   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:23:14.624711   14731 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:23:14.641378   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:14.651444   14731 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:23:14.875541   14731 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:23:15.086384   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.300370   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:23:15.313951   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:15.324456   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.540454   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
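
The unmask/enable/daemon-reload/restart churn above switches the host from containerd's CRI socket to cri-dockerd's. Condensed into one runnable sequence (same commands as the log, minus the is-active probes; assumes docker and cri-dockerd are installed):

    # Sketch: bring up docker + cri-dockerd as the log above does.
    sudo systemctl unmask docker.service
    sudo systemctl enable docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket
    sudo systemctl restart cri-docker.service
    stat /var/run/cri-dockerd.sock   # minikube waits up to 60s for this path
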
	I0916 10:23:15.606406   14731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:23:15.606476   14731 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:23:15.607900   14731 start.go:563] Will wait 60s for crictl version
	I0916 10:23:15.607956   14731 exec_runner.go:51] Run: which crictl
	I0916 10:23:15.608880   14731 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:23:15.638324   14731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
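
The Version: 0.1.0 block above is crictl output confirming that cri-dockerd answers on the endpoint written to /etc/crictl.yaml a moment earlier. The same spot check by hand (crictl path as in the log):

    # Confirm the CRI endpoint from /etc/crictl.yaml is serving:
    sudo /usr/local/bin/crictl version
    # Expected, per the log: RuntimeName docker, RuntimeVersion 27.2.1, RuntimeApiVersion v1
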
	I0916 10:23:15.638393   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.658714   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.681662   14731 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:23:15.681764   14731 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:15.684836   14731 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:23:15.686171   14731 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:15.686280   14731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:23:15.686290   14731 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0916 10:23:15.686371   14731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:23:15.686410   14731 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:23:15.733026   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.733051   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:15.733070   14731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:15.733090   14731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:15.733254   14731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
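
A config like the one above can be exercised without mutating the host via kubeadm's dry-run mode; a sketch, assuming kubeadm is on PATH (minikube itself prefixes PATH with /var/lib/minikube/binaries/v1.31.1, as the init invocation further below shows):

    # Sketch: parse and dry-run the generated kubeadm config without changing the host.
    # /var/tmp/minikube/kubeadm.yaml is where minikube installs the file (see below).
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
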
	I0916 10:23:15.733305   14731 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.741208   14731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:23:15.741251   14731 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.748963   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:23:15.748989   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:23:15.748971   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
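
Each "Not caching" URL above pairs the binary with a ?checksum=file:… pointer at its .sha256 companion. The manual equivalent of that verification looks like this (URL layout copied from the log; assumes the .sha256 files on dl.k8s.io contain a bare hex digest, which is their published format):

    # Sketch: fetch and checksum-verify kubelet as the URLs above describe.
    VERSION=v1.31.1
    ARCH=amd64
    BASE="https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}"
    curl -fLO "${BASE}/kubelet"
    curl -fLO "${BASE}/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
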
	I0916 10:23:15.749021   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:15.749048   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:23:15.749023   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:23:15.759703   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:23:15.804184   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000397322 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:23:15.808532   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3573748997 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:23:15.825059   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036820018 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:23:15.890865   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:15.899083   14731 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:23:15.899106   14731 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.899146   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.906895   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:23:15.907034   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube686635375 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.914549   14731 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:23:15.914568   14731 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:23:15.914597   14731 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:23:15.921424   14731 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:15.921543   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube124460998 /lib/systemd/system/kubelet.service
	I0916 10:23:15.930481   14731 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:23:15.930611   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4089828324 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:23:15.938132   14731 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:15.939361   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:16.143380   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:16.158863   14731 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:23:16.158890   14731 certs.go:194] generating shared ca certs ...
	I0916 10:23:16.158911   14731 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.159062   14731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:23:16.159122   14731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:23:16.159135   14731 certs.go:256] generating profile certs ...
	I0916 10:23:16.159199   14731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:23:16.159225   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 10:23:16.405613   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt ...
	I0916 10:23:16.405642   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt: {Name:mk3286357234cda40557f508e5029c93016f9710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405782   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key ...
	I0916 10:23:16.405793   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key: {Name:mk20783244a73e90e04cdbc506e3032ad365b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405856   14731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:23:16.405870   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0916 10:23:16.569943   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0916 10:23:16.569971   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkaaeb0c21c9904b79d53b2917cee631d41c921c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570095   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0916 10:23:16.570104   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06e5d9a924eb3ef87fa2b5fa51a9f83a4abb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570154   14731 certs.go:381] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	I0916 10:23:16.570220   14731 certs.go:385] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key
	I0916 10:23:16.570270   14731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:23:16.570283   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 10:23:16.840205   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 10:23:16.840238   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt: {Name:mkffd4795ad0708e29c9e63a9f73c6e601584e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840383   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key ...
	I0916 10:23:16.840393   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key: {Name:mk1595e9621083c2801a11be8a4c6d2c56ebeb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
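
The certs.go/crypto.go lines above mint the per-profile certificates in Go against the shared minikubeCA. For illustration only, an openssl equivalent of the "minikube-user" client pair (the system:masters group is an assumption, not shown in the log; ca.crt/ca.key are the shared CA files referenced above):

    # Sketch (not minikube's code): sign a client cert with the minikube CA.
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj "/O=system:masters/CN=minikube-user" -out client.csr   # group is an assumption
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -out client.crt -days 365
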
	I0916 10:23:16.840537   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:23:16.840569   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:16.840594   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:16.840624   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:16.841173   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:16.841296   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube746649098 /var/lib/minikube/certs/ca.crt
	I0916 10:23:16.850974   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:23:16.851102   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216583324 /var/lib/minikube/certs/ca.key
	I0916 10:23:16.859052   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:16.859162   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2429656602 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:23:16.867993   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:16.868122   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31356631 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:23:16.876316   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:23:16.876432   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2172809749 /var/lib/minikube/certs/apiserver.crt
	I0916 10:23:16.883937   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:16.884043   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3752504884 /var/lib/minikube/certs/apiserver.key
	I0916 10:23:16.891211   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:16.891348   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1611886685 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:23:16.898521   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:16.898630   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414896728 /var/lib/minikube/certs/proxy-client.key
	I0916 10:23:16.905794   14731 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:23:16.905813   14731 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.905843   14731 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.913039   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:16.913160   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817740740 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.920335   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:16.920430   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1902791778 /var/lib/minikube/kubeconfig
	I0916 10:23:16.929199   14731 exec_runner.go:51] Run: openssl version
	I0916 10:23:16.931944   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:16.940176   14731 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941576   14731 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941622   14731 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.944402   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
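
The b5213941.0 link above follows OpenSSL's hashed-directory convention: the link name is the certificate's subject hash plus a ".0" suffix, which is exactly what the two openssl/ln commands in the log compute. Spelled out:

    # How /etc/ssl/certs/b5213941.0 gets its name: subject hash + ".0".
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
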
	I0916 10:23:16.952213   14731 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:16.953336   14731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:16.953373   14731 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:16.953468   14731 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:23:16.968833   14731 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:16.976751   14731 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:16.984440   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:17.005001   14731 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:17.013500   14731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:17.013523   14731 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:17.013559   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:17.021530   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:17.021577   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:17.029363   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:17.038339   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:17.038392   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:17.046433   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.055974   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:17.056021   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.064002   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:17.087369   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:17.087421   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:17.094700   14731 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:23:17.125739   14731 kubeadm.go:310] W0916 10:23:17.125617   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.126248   14731 kubeadm.go:310] W0916 10:23:17.126207   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
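
Both v1beta3 warnings above point at the same remedy, quoted directly from kubeadm's message (the new-config path here is a placeholder):

    # Migrate the deprecated kubeadm.k8s.io/v1beta3 config, as the warning suggests:
    kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /tmp/kubeadm-migrated.yaml
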
	I0916 10:23:17.127875   14731 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:17.127925   14731 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:17.218197   14731 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:17.218241   14731 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:17.218245   14731 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:17.218250   14731 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:17.228659   14731 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:17.231432   14731 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:17.231476   14731 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:17.231492   14731 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:17.409888   14731 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:17.475990   14731 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:17.539491   14731 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:17.796104   14731 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:18.073234   14731 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:18.073357   14731 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.366388   14731 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:18.366499   14731 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.555987   14731 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:18.639688   14731 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:18.710297   14731 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:18.710445   14731 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:19.161742   14731 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:19.258436   14731 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:19.315076   14731 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:19.572576   14731 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:19.765615   14731 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:19.766182   14731 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:19.768469   14731 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:19.770925   14731 out.go:235]   - Booting up control plane ...
	I0916 10:23:19.770956   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:19.770979   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:19.770988   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:19.791511   14731 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:19.797034   14731 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:19.797064   14731 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:20.020707   14731 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:20.020728   14731 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:20.522367   14731 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.615965ms
	I0916 10:23:20.522388   14731 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:24.524089   14731 kubeadm.go:310] [api-check] The API server is healthy after 4.001711526s
	I0916 10:23:24.534645   14731 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:24.545508   14731 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:24.561586   14731 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:24.561610   14731 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:24.569540   14731 kubeadm.go:310] [bootstrap-token] Using token: 60y8iu.vk0rxdhc25utw4uo
	I0916 10:23:24.571078   14731 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:24.571112   14731 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:24.575563   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:24.581879   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:24.584635   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:24.587409   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:24.589877   14731 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:24.929369   14731 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:25.351323   14731 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:25.929753   14731 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:25.930651   14731 kubeadm.go:310] 
	I0916 10:23:25.930669   14731 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:25.930673   14731 kubeadm.go:310] 
	I0916 10:23:25.930677   14731 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:25.930693   14731 kubeadm.go:310] 
	I0916 10:23:25.930705   14731 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:25.930710   14731 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:25.930713   14731 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:25.930717   14731 kubeadm.go:310] 
	I0916 10:23:25.930721   14731 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:25.930725   14731 kubeadm.go:310] 
	I0916 10:23:25.930730   14731 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:25.930737   14731 kubeadm.go:310] 
	I0916 10:23:25.930742   14731 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:25.930749   14731 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:25.930753   14731 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:25.930759   14731 kubeadm.go:310] 
	I0916 10:23:25.930763   14731 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:25.930765   14731 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:25.930768   14731 kubeadm.go:310] 
	I0916 10:23:25.930770   14731 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930773   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d \
	I0916 10:23:25.930778   14731 kubeadm.go:310] 	--control-plane 
	I0916 10:23:25.930781   14731 kubeadm.go:310] 
	I0916 10:23:25.930784   14731 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:25.930791   14731 kubeadm.go:310] 
	I0916 10:23:25.930794   14731 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930798   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d 
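
The bootstrap token in the join lines above carries the 24h TTL set in the config earlier; once it lapses, a fresh worker join command can be printed on the control plane:

    # Regenerate the join command after the token's 24h TTL expires:
    sudo kubeadm token create --print-join-command
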
	I0916 10:23:25.933502   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:25.933525   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:25.935106   14731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:23:25.936272   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:23:25.946405   14731 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:23:25.946528   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951121141 /etc/cni/net.d/1-k8s.conflist
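
The 496-byte conflist minikube installs isn't echoed in the log. For orientation only, a generic bridge conflist of the same shape (hypothetical file name and contents; only the 10.244.0.0/16 subnet is taken from the pod CIDR chosen above):

    # Illustrative only -- NOT minikube's /etc/cni/net.d/1-k8s.conflist:
    cat <<'EOF' | sudo tee /etc/cni/net.d/99-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
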
	I0916 10:23:25.957597   14731 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:25.957652   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:25.957691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 10:23:25.966602   14731 ops.go:34] apiserver oom_adj: -16
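
The -16 above is read straight from procfs with the command run earlier in the log; a negative adjustment makes the API server a less likely OOM-killer target. By hand:

    # Read the API server's OOM score adjustment (the log reports -16):
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"
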
	I0916 10:23:26.024809   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:26.524979   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.025101   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.525561   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.024962   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.525631   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.025594   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.525691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.024918   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.524850   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.024821   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
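
The repeated "get sa default" runs above (roughly every 500ms, per the timestamps) are a readiness poll: the bootstrap isn't usable until the default ServiceAccount exists in the default namespace. The same wait loop as a sketch, with paths from the log:

    # Sketch: poll until the default ServiceAccount exists, as the log above does.
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
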
	I0916 10:23:31.098521   14731 kubeadm.go:1113] duration metric: took 5.140910239s to wait for elevateKubeSystemPrivileges
	I0916 10:23:31.098550   14731 kubeadm.go:394] duration metric: took 14.145180358s to StartCluster
	I0916 10:23:31.098572   14731 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.098640   14731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:31.099273   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.099484   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:31.099563   14731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:31.099694   14731 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:23:31.099713   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.099725   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:23:31.099724   14731 addons.go:69] Setting yakd=true in profile "minikube"
	I0916 10:23:31.099749   14731 addons.go:234] Setting addon yakd=true in "minikube"
	I0916 10:23:31.099762   14731 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0916 10:23:31.099777   14731 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0916 10:23:31.099788   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.099807   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100187   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100203   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100227   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100376   14731 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0916 10:23:31.100405   14731 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0916 10:23:31.100436   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100438   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100445   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100453   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100459   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100485   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100491   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100769   14731 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0916 10:23:31.100790   14731 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0916 10:23:31.100826   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.101070   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101090   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101123   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101267   14731 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0916 10:23:31.101295   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0916 10:23:31.101510   14731 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0916 10:23:31.101527   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101535   14731 mustload.go:65] Loading cluster: minikube
	I0916 10:23:31.101541   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101572   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101737   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.101867   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101887   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101919   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102148   14731 addons.go:69] Setting volcano=true in profile "minikube"
	I0916 10:23:31.102169   14731 addons.go:234] Setting addon volcano=true in "minikube"
	I0916 10:23:31.102195   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.102220   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102233   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102253   14731 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0916 10:23:31.102265   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102298   14731 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:31.102312   14731 out.go:177] * Configuring local host environment ...
	I0916 10:23:31.102789   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102801   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102825   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.103836   14731 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0916 10:23:31.103861   14731 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0916 10:23:31.103905   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104241   14731 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:23:31.104257   14731 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:23:31.104275   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104742   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104753   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104763   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104773   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104784   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104812   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104956   14731 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0916 10:23:31.102331   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104975   14731 addons.go:69] Setting registry=true in profile "minikube"
	I0916 10:23:31.104984   14731 addons.go:234] Setting addon registry=true in "minikube"
	I0916 10:23:31.105000   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.105157   14731 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0916 10:23:31.105184   14731 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0916 10:23:31.105213   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104967   14731 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0916 10:23:31.105323   14731 host.go:66] Checking if "minikube" exists ...
	W0916 10:23:31.106873   14731 out.go:270] * 
	W0916 10:23:31.106888   14731 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:23:31.106896   14731 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:23:31.106903   14731 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:23:31.106909   14731 out.go:270] * 
	W0916 10:23:31.106955   14731 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:23:31.106962   14731 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:23:31.106971   14731 out.go:270] * 
	W0916 10:23:31.106995   14731 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:23:31.107002   14731 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:23:31.107009   14731 out.go:270] * 
	W0916 10:23:31.107018   14731 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:23:31.107045   14731 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:31.107984   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.107997   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.108026   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.108454   14731 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:31.109770   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.109792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.109828   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.110054   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:31.124712   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.127087   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.128504   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.130104   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.138756   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.138792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.138831   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.139721   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.139749   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.139779   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.142090   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142122   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142129   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142151   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142345   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.156934   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.156999   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.158343   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.158400   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.160580   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.163820   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.169364   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.171885   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.171953   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.173802   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.173849   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.174374   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.174420   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176241   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.176292   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176846   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.185299   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.186516   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.186575   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.194708   14731 api_server.go:204] freezer state: "THAWED"
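For context: the repeated "sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup" and freezer.state reads above are the apiserver status check locating the kube-apiserver container's cgroup-v1 freezer hierarchy and confirming it is THAWED (not frozen) before probing health. A minimal Go sketch of that lookup, assuming cgroup v1 mounted at /sys/fs/cgroup/freezer; the helper name and error text are ours, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// freezerState finds the freezer hierarchy for a PID in /proc/<pid>/cgroup
	// (lines like "2:freezer:/kubepods/burstable/pod.../<container-id>") and
	// reads the corresponding freezer.state, e.g. "THAWED".
	func freezerState(pid int) (string, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		re := regexp.MustCompile(`(?m)^\d+:freezer:(.*)$`)
		m := re.FindStringSubmatch(string(data))
		if m == nil {
			return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
		}
		state, err := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(state)), nil
	}

	func main() {
		state, err := freezerState(16036) // PID taken from the log above
		fmt.Println(state, err)
	}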
	I0916 10:23:31.194738   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.194977   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.195032   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199863   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199893   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199933   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199946   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.200834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.200854   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.201607   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.201750   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.205007   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.205028   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.205039   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.205094   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.206485   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
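Once the freezer reports THAWED, the check above issues a GET against https://10.138.0.48:8443/healthz and treats a 200 response with body "ok" as healthy. A self-contained sketch of that probe; the timeout is an assumption, and a real client would trust the cluster CA instead of skipping TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// apiserverHealthy probes <endpoint>/healthz and reports whether the
	// apiserver answered 200 "ok", mirroring the api_server.go:253/279 lines.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Self-signed serving cert in this setup; skipping verification
			// is for illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://10.138.0.48:8443")
		fmt.Println(ok, err)
	}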
	I0916 10:23:31.210587   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.212372   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.212395   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.213745   14731 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:31.214160   14731 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 10:23:31.214415   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.216499   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.216520   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.216547   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.217076   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.217112   14731 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:31.217909   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube143406645 /etc/kubernetes/addons/yakd-ns.yaml
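The installing/cp pairs throughout this stretch follow one staging pattern: each addon manifest is written to a temp file, copied into /etc/kubernetes/addons with sudo cp -a, and later applied with the version-pinned kubectl under /var/lib/minikube/binaries. A hedged sketch of that sequence; the function name and the sample manifest are illustrative, and this must run as a user with sudo rights:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// installAddon stages a manifest via a temp file, copies it into place
	// preserving mode/ownership (-a), then applies it with the pinned kubectl,
	// as the exec_runner.go Run lines above do.
	func installAddon(manifest []byte, dst, kubectl, kubeconfig string) error {
		tmp, err := os.CreateTemp("", "minikube")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.Write(manifest); err != nil {
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		if out, err := exec.Command("sudo", "cp", "-a", tmp.Name(), dst).CombinedOutput(); err != nil {
			return fmt.Errorf("cp: %v: %s", err, out)
		}
		out, err := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", dst).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := installAddon(
			[]byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: yakd-dashboard\n"),
			"/etc/kubernetes/addons/yakd-ns.yaml",
			"/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/var/lib/minikube/kubeconfig")
		fmt.Println(err)
	}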
	I0916 10:23:31.218842   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.219226   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.219253   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.220512   14731 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:31.220867   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.221546   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.223173   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.221979   14731 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.223461   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:31.223768   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3150586776 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.225359   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.227613   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.227660   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.229063   14731 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:31.229334   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:31.230849   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.230883   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231177   14731 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:31.231657   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.231693   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234554   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231695   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234684   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.232274   14731 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:31.235888   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.236046   14731 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236071   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:31.236209   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3107188705 /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236904   14731 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:31.238542   14731 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.238573   14731 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:31.238771   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2095578904 /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.239882   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.240045   14731 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:31.244446   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.245954   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:31.246834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252064   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.246956   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.252578   14731 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.252624   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:31.246990   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252873   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247002   14731 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:31.253137   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube95020260 /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.247038   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253167   14731 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:31.253286   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2405129530 /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253617   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.253668   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.247061   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.253722   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247236   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:31.255868   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.255894   14731 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:31.255954   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.255976   14731 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:31.256002   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671809590 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.256098   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1236849984 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.257119   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.257771   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:31.259551   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259704   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259965   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.260128   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.260751   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.261489   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.261250   14731 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:31.261394   14731 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0916 10:23:31.262031   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.262778   14731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:31.262782   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.262800   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.262829   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.262833   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:31.264514   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264537   14731 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:23:31.264545   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264584   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264768   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:31.264924   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.264959   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:31.265088   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2364820269 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.266759   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.268033   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:31.268086   14731 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:31.269452   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:31.269500   14731 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:31.272346   14731 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272373   14731 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:31.272497   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754220183 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272890   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:31.275160   14731 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275188   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:31.275361   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2480903723 /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275532   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:31.277158   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277179   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:31.277664   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube478526718 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277859   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.277882   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:31.278022   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2636867839 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.290799   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.290835   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:31.291218   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3814086991 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.295428   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.295459   14731 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:31.295604   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3740101312 /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.306392   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.306425   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.311213   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.311248   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:31.311424   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube747122049 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.312994   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.313036   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.317835   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.318230   14731 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:31.323578   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube338558244 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.341814   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.341846   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:31.341971   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323528791 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.342204   14731 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.342226   14731 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:31.342566   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.342625   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.342837   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.342890   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube292318438 /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.343078   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.343101   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:31.343219   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4032243386 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.358435   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.358525   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358549   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:31.358693   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2881932452 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358881   14731 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:31.359009   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1282728706 /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.359505   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.366545   14731 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.366587   14731 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:31.366713   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1171915216 /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.378664   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.378695   14731 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:31.378815   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473351497 /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.380393   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.380417   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.382937   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.382966   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:31.383096   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2529455688 /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.384304   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.384326   14731 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:31.384438   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube881397 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.385231   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.385271   14731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385284   14731 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:23:31.385292   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385328   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.387805   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.387835   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:31.387939   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332358551 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.390197   14731 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.390227   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:31.390366   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46497832 /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.397672   14731 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:31.397951   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3186992100 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.403599   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.403630   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:31.403754   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445986553 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.409076   14731 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.409115   14731 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:31.409283   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1651200957 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.415599   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.415621   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:31.415721   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918202348 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.417404   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.423447   14731 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423472   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:31.423586   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube419582909 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423765   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.423804   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.436943   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.438121   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.443433   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.443523   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:31.443757   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube41635707 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.462088   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462127   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:31.462266   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805595243 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462657   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:31.462783   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160047024 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.464607   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.476223   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.479433   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.479463   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.482688   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.487583   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.490669   14731 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:31.492378   14731 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:31.493942   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.493975   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:31.494108   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281912972 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.499328   14731 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499357   14731 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:31.499374   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.499400   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:31.499487   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2719508217 /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499527   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411641332 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.518103   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.577544   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.577588   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:31.577779   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601059446 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.583317   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.651738   14731 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.651774   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:31.653267   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1921119500 /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.672720   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:31.786205   14731 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789214   14731 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:23:31.789238   14731 node_ready.go:38] duration metric: took 2.992874ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789249   14731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:31.802669   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:31.813190   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.813232   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:31.813392   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591024036 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.863589   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.965015   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.965162   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:31.966268   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3974451214 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.977982   14731 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
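The long bash pipeline at 10:23:31.229334, which this line confirms, rewrites CoreDNS's ConfigMap in place: sed inserts a hosts block resolving host.minikube.internal (here to 127.0.0.1) just ahead of the forward directive, so the custom record is consulted before upstream resolution; it also adds a log directive after errors, which this sketch omits. A sketch of the equivalent string edit, using an illustrative Corefile fragment rather than the cluster's real one:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord places a hosts plugin block immediately before the
	// Corefile's forward directive, mirroring what the sed expression does.
	func injectHostRecord(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		return strings.Replace(corefile, "        forward . /etc/resolv.conf",
			hosts+"        forward . /etc/resolv.conf", 1)
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Println(injectHostRecord(corefile, "127.0.0.1"))
	}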
	I0916 10:23:32.088850   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.088892   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:32.089762   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434131392 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.191154   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.191186   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:32.191329   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332266551 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.242672   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.242725   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:32.243830   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2503739100 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.299481   14731 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:32.324442   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.403566   14731 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0916 10:23:32.489342   14731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0916 10:23:32.514409   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.096961786s)
	I0916 10:23:32.514451   14731 addons.go:475] Verifying addon registry=true in "minikube"
	I0916 10:23:32.516449   14731 out.go:177] * Verifying registry addon...
	I0916 10:23:32.528963   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:23:32.532579   14731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:32.532675   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
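The kapi.go "waiting for pod ..., current state: Pending" lines that follow are a fixed-interval poll against a label selector, bounded by an overall deadline (6m0s here). A generic sketch of that wait loop; the interval and the stand-in readiness condition are assumptions, not minikube's values:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls ready() at a fixed interval until it reports true, it
	// returns an error, or the overall timeout expires.
	func waitFor(timeout, interval time.Duration, ready func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := ready()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		_ = waitFor(6*time.Minute, 500*time.Millisecond, func() (bool, error) {
			// Stand-in for "pod Pending -> Running" after about a second.
			return time.Since(start) > time.Second, nil
		})
		fmt.Println("ready after", time.Since(start).Round(time.Millisecond))
	}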
	I0916 10:23:32.570911   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.088181519s)
	I0916 10:23:32.907708   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.389561221s)
	I0916 10:23:32.966699   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.383338477s)
	I0916 10:23:33.052703   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.126489   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.262849545s)
	I0916 10:23:33.178161   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.713502331s)
	W0916 10:23:33.178208   14731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:33.178247   14731 retry.go:31] will retry after 159.834349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
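The failure above is an ordering problem: applying CRDs and resources that instantiate them in a single kubectl apply races CRD registration, so the VolumeSnapshotClass has no registered kind yet ("ensure CRDs are installed first") and retry.go schedules a re-apply after a short backoff, which succeeds once the CRDs are established. A sketch of that retry-with-backoff shape; the attempt count, delays, and doubling factor are illustrative, not minikube's:

	package main

	import (
		"fmt"
		"time"
	)

	// withRetry runs fn up to attempts times, sleeping between tries and
	// doubling the delay, mirroring the "will retry after ..." lines above.
	func withRetry(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		_ = withRetry(5, 150*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				// Stand-in for the apply failing until the CRDs are established.
				return fmt.Errorf("no matches for kind %q", "VolumeSnapshotClass")
			}
			return nil
		})
		fmt.Println("applied after", calls, "attempts")
	}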
	I0916 10:23:33.338693   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:33.540389   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.809689   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:34.053876   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.539589   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.570200   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.231431807s)
	I0916 10:23:34.612191   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.252641903s)
	I0916 10:23:34.884849   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.560344146s)
	I0916 10:23:34.884890   14731 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:34.886878   14731 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:34.890123   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:34.895733   14731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:34.895758   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.033190   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.396363   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.534375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.895151   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.035637   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.308497   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:36.395655   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.533207   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.895449   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.033542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.395180   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.533433   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.895384   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.033538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.473613   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:23:38.473795   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398753053 /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.474692   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.484004   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:23:38.484134   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434783837 /var/lib/minikube/google_cloud_project
	I0916 10:23:38.494551   14731 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0916 10:23:38.494595   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:38.495054   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:38.495069   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:38.495094   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:38.511610   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:38.520861   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:38.520914   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:38.529401   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:38.529444   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:38.599469   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:38.599542   14731 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.600327   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.656167   14731 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:23:38.735860   14731 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:38.798815   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.798859   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:23:38.798995   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2626597480 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.808091   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:38.862000   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.862041   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:23:38.862151   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046341520 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.872893   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.872922   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:23:38.873036   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2054254500 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.883326   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.894333   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.033277   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.262619   14731 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0916 10:23:39.264955   14731 out.go:177] * Verifying gcp-auth addon...
	I0916 10:23:39.266807   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:23:39.268717   14731 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:23:39.310878   14731 pod_ready.go:98] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310904   14731 pod_ready.go:82] duration metric: took 7.508146008s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	E0916 10:23:39.310915   14731 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310924   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:39.395512   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.532567   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.894633   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.033580   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.394602   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.533200   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.815447   14731 pod_ready.go:93] pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.815468   14731 pod_ready.go:82] duration metric: took 1.504536219s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.815477   14731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819153   14731 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.819171   14731 pod_ready.go:82] duration metric: took 3.688538ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819180   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822800   14731 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.822815   14731 pod_ready.go:82] duration metric: took 3.628798ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822823   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826537   14731 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.826556   14731 pod_ready.go:82] duration metric: took 3.726729ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826567   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.894014   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.906975   14731 pod_ready.go:93] pod "kube-proxy-gm7kv" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.906995   14731 pod_ready.go:82] duration metric: took 80.421296ms for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.907005   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.033182   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.307459   14731 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.307479   14731 pod_ready.go:82] duration metric: took 400.467827ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.307488   14731 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.394410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.532263   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.707267   14731 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.707293   14731 pod_ready.go:82] duration metric: took 399.79657ms for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.707305   14731 pod_ready.go:39] duration metric: took 9.918041839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:41.707331   14731 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:23:41.707469   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:41.727079   14731 api_server.go:72] duration metric: took 10.620002836s to wait for apiserver process to appear ...
	I0916 10:23:41.727105   14731 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:23:41.727130   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:41.731666   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:41.732551   14731 api_server.go:141] control plane version: v1.31.1
	I0916 10:23:41.732571   14731 api_server.go:131] duration metric: took 5.460229ms to wait for apiserver health ...
	I0916 10:23:41.732579   14731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:23:41.894027   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.998997   14731 system_pods.go:59] 17 kube-system pods found
	I0916 10:23:41.999033   14731 system_pods.go:61] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:41.999047   14731 system_pods.go:61] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:41.999057   14731 system_pods.go:61] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:41.999075   14731 system_pods.go:61] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:41.999087   14731 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:41.999092   14731 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:41.999096   14731 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:41.999099   14731 system_pods.go:61] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:41.999104   14731 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:41.999111   14731 system_pods.go:61] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:41.999114   14731 system_pods.go:61] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:41.999127   14731 system_pods.go:61] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:41.999138   14731 system_pods.go:61] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:41.999147   14731 system_pods.go:61] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999159   14731 system_pods.go:61] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999164   14731 system_pods.go:61] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:41.999175   14731 system_pods.go:61] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:41.999182   14731 system_pods.go:74] duration metric: took 266.598276ms to wait for pod list to return data ...
	I0916 10:23:41.999196   14731 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:23:42.032591   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.106881   14731 default_sa.go:45] found service account: "default"
	I0916 10:23:42.106907   14731 default_sa.go:55] duration metric: took 107.703967ms for default service account to be created ...
	I0916 10:23:42.106918   14731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:23:42.375306   14731 system_pods.go:86] 17 kube-system pods found
	I0916 10:23:42.375339   14731 system_pods.go:89] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:42.375347   14731 system_pods.go:89] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:42.375355   14731 system_pods.go:89] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:42.375362   14731 system_pods.go:89] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:42.375367   14731 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:42.375372   14731 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:42.375377   14731 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:42.375382   14731 system_pods.go:89] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:42.375385   14731 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:42.375395   14731 system_pods.go:89] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:42.375400   14731 system_pods.go:89] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:42.375405   14731 system_pods.go:89] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:42.375411   14731 system_pods.go:89] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:42.375417   14731 system_pods.go:89] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375425   14731 system_pods.go:89] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375429   14731 system_pods.go:89] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:42.375435   14731 system_pods.go:89] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:42.375442   14731 system_pods.go:126] duration metric: took 268.518179ms to wait for k8s-apps to be running ...
	I0916 10:23:42.375451   14731 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:23:42.375494   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:42.387115   14731 system_svc.go:56] duration metric: took 11.655134ms WaitForService to wait for kubelet
	I0916 10:23:42.387140   14731 kubeadm.go:582] duration metric: took 11.2800718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:42.387171   14731 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:23:42.394773   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.507386   14731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:23:42.507413   14731 node_conditions.go:123] node cpu capacity is 8
	I0916 10:23:42.507426   14731 node_conditions.go:105] duration metric: took 120.250263ms to run NodePressure ...
	I0916 10:23:42.507440   14731 start.go:241] waiting for startup goroutines ...
	I0916 10:23:42.531600   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.894380   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.032814   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.393764   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.533097   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.895538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.033018   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.394939   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.532533   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.923857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.032464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.395518   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.532657   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.894621   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.033157   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.394820   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.533142   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.894150   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.032554   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.394103   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.532755   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.923101   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.032246   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.393952   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.531988   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.894443   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.032216   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.395492   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.532583   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.894398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.033134   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.394173   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.532730   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.895356   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.032410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.394499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.532834   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.894466   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.032976   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.393504   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.532575   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.895473   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.032897   14731 kapi.go:107] duration metric: took 20.503936091s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:53.395464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.897663   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.395912   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.895542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.394636   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.895289   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.394104   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.894685   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.394359   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.894369   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.394113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.895010   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.394765   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.895050   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.394699   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.893904   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.394519   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.893535   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.394889   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.894397   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.441082   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.893998   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.395141   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.895375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.395269   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.896063   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.394972   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.894856   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.395279   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.895293   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.394857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.896499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.394125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.895033   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.395202   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.894724   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.394201   14731 kapi.go:107] duration metric: took 36.504077115s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:24:20.771019   14731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:20.771044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.269732   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.769379   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.270108   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.770020   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.270002   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.769993   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.270052   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.770494   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.270065   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.770030   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.269978   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.769822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.269485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.770749   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.270006   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.769786   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.269361   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.770193   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.270017   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.769639   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.269368   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.770132   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.270538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.770922   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.270016   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.770707   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.269925   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.770343   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.270669   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.770484   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.269981   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.770067   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.269913   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.769999   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.269695   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.769660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.270376   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.770125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.270113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.769635   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.269392   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.770622   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.270727   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.771121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.270788   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.779792   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.269641   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.771197   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.270296   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.770234   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.270660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.770461   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.270582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.770582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.269826   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.769427   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.270745   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.769804   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.270843   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.770187   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.270064   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.769562   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.270917   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.769965   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.270218   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.770822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.269777   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.770121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.269909   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.770485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.271044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.770398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.270401   14731 kapi.go:107] duration metric: took 1m18.003594843s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:24:57.272413   14731 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0916 10:24:57.273706   14731 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:24:57.274969   14731 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:24:57.276179   14731 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, metrics-server, helm-tiller, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, volcano, registry, csi-hostpath-driver, gcp-auth
	I0916 10:24:57.277503   14731 addons.go:510] duration metric: took 1m26.177945157s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd metrics-server helm-tiller storage-provisioner storage-provisioner-rancher inspektor-gadget volumesnapshots volcano registry csi-hostpath-driver gcp-auth]
	I0916 10:24:57.277539   14731 start.go:246] waiting for cluster config update ...
	I0916 10:24:57.277557   14731 start.go:255] writing updated cluster config ...
	I0916 10:24:57.277828   14731 exec_runner.go:51] Run: rm -f paused
	I0916 10:24:57.280918   14731 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:24:57.282289   14731 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:38:00 UTC. --
	Sep 16 10:24:58 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:58.030336094Z" level=info msg="ignoring event" container=063696e8a73aabc89418d2c58e71706ba02ccbbecf8ff00cbae4ce69ab4d8dc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:25:38 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:25:38Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:25:40 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:25:40.013070122Z" level=info msg="ignoring event" container=285e9d3bf61063164576db1e8b56067f2715f3125c65a408fb460b33df4e0df3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:27:12 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:27:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836428Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836085Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.785558764Z" level=error msg="Error running exec 13e088d02d0a5f22acc5e5b1a4471ba70b2f244b367260c945e607695da23676 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799299215Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799311411Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.801146259Z" level=error msg="Error running exec 8124ff9355b2b195f4666e956e5c04835c7ab5bbca41ab5f07f5d54c9a438e8a in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.997546489Z" level=info msg="ignoring event" container=f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:01 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:30:01Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860094779Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860112359Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.861900754Z" level=error msg="Error running exec 7325b4844d467316c92c35912814ef76ffc52ab0706fc16a141d2d4c86eec807 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:03 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:03.053613980Z" level=info msg="ignoring event" container=f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.355786042Z" level=info msg="ignoring event" container=bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.422842358Z" level=info msg="ignoring event" container=6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.489977617Z" level=info msg="ignoring event" container=bede25b8f44c47a7583d31e5f552ceb2818b45bf9b6e66175cefd80b6e4a1ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.585848075Z" level=info msg="ignoring event" container=8a0796a6fd139e34146729f05330e8554afd338b598fd53c135d700704cea580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:16 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:16.809464495Z" level=info msg="ignoring event" container=3902ec2c22c138271b7c612de2b2ec28e9b3e2406519c1a03ab3d1e1760a1146 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:36:28 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:36:28.322247254Z" level=info msg="ignoring event" container=1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:36:28 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:36:28.464407227Z" level=info msg="ignoring event" container=1d5dec60ab67acd84e750360030eddc13a9150ac9c006977978cdb19a2e6156b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:59 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:37:59.980163682Z" level=info msg="ignoring event" container=fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:38:00 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:38:00.122483210Z" level=info msg="ignoring event" container=4cc0471023071a3d36728e0fb6850e3fa91bc3294992e3a0df5a4b8dce1d050a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b806437d39cb5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 13 minutes ago      Running             gcp-auth                                 0                   872b837fda1bc       gcp-auth-89d5ffd79-wt6q9
	6b6303f81cb52       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          13 minutes ago      Running             csi-snapshotter                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d549f78521f57       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          13 minutes ago      Running             csi-provisioner                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	9125db73d99e1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            13 minutes ago      Running             liveness-probe                           0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	87c37483d2112       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           13 minutes ago      Running             hostpath                                 0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	cd42401f74b1d       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         13 minutes ago      Running             admission                                0                   d5cc1eab65661       volcano-admission-77d7d48b68-t975d
	0c0ddb709904f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                13 minutes ago      Running             node-driver-registrar                    0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	b0782903176d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              13 minutes ago      Running             csi-resizer                              0                   fb9dfe220b3dc       csi-hostpath-resizer-0
	4edaa9f0351e1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             13 minutes ago      Running             csi-attacher                             0                   fa27205224e9f       csi-hostpath-attacher-0
	f0ce5f8efdc2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   13 minutes ago      Running             csi-external-health-monitor-controller   0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d35f343c48bcb       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               13 minutes ago      Running             volcano-scheduler                        0                   ca6d7d9980376       volcano-scheduler-576bc46687-l88qd
	3fa7892ed6588       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      13 minutes ago      Running             volcano-controllers                      0                   1d8c71b5408cc       volcano-controllers-56675bb4d5-kd2r2
	23bdeff0c7c03       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         14 minutes ago      Exited              main                                     0                   2684a290edfd1       volcano-admission-init-4rd4m
	a7c6ba8b5b8e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      14 minutes ago      Running             volume-snapshot-controller               0                   2a9eff5290337       snapshot-controller-56fcc65765-c729p
	59e2e493c17f7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      14 minutes ago      Running             volume-snapshot-controller               0                   a62d801d6adc1       snapshot-controller-56fcc65765-hhv7d
	c5ee33602669d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       14 minutes ago      Running             local-path-provisioner                   0                   6fcb08908435e       local-path-provisioner-86d989889c-xpx7m
	c2bb3772d49b5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        14 minutes ago      Running             yakd                                     0                   54361ea6661c2       yakd-dashboard-67d98fc6b-ggfmd
	566744d15c91f       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               14 minutes ago      Running             cloud-spanner-emulator                   0                   2ce78388a8512       cloud-spanner-emulator-769b77f747-7x6cj
	1cb6e9270416d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     14 minutes ago      Running             nvidia-device-plugin-ctr                 0                   6c5f84705a086       nvidia-device-plugin-daemonset-dcrh9
	e19218997c830       6e38f40d628db                                                                                                                                14 minutes ago      Running             storage-provisioner                      0                   debc24e02ca98       storage-provisioner
	e0a1b4e718aed       c69fa2e9cbf5f                                                                                                                                14 minutes ago      Running             coredns                                  0                   44104ce9decd6       coredns-7c65d6cfc9-vlmkz
	95dfe8f64bc6f       60c005f310ff3                                                                                                                                14 minutes ago      Running             kube-proxy                               0                   3eddba63436f7       kube-proxy-gm7kv
	236092569fa7f       2e96e5913fc06                                                                                                                                14 minutes ago      Running             etcd                                     0                   f4c192de28c8e       etcd-ubuntu-20-agent-2
	f656d4b3e221b       6bab7719df100                                                                                                                                14 minutes ago      Running             kube-apiserver                           0                   13c6d1481d7e3       kube-apiserver-ubuntu-20-agent-2
	abadc50dd44f1       175ffd71cce3d                                                                                                                                14 minutes ago      Running             kube-controller-manager                  0                   2dd1e926360a9       kube-controller-manager-ubuntu-20-agent-2
	0412032e5006c       9aa1fad941575                                                                                                                                14 minutes ago      Running             kube-scheduler                           0                   b7f61176a82d0       kube-scheduler-ubuntu-20-agent-2
	
	
	==> coredns [e0a1b4e718ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59960 - 9097 "HINFO IN 5932384522844147917.1993008146596938559. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018267326s
	[INFO] 10.244.0.24:39221 - 38983 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000387765s
	[INFO] 10.244.0.24:57453 - 43799 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000481367s
	[INFO] 10.244.0.24:56558 - 1121 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126982s
	[INFO] 10.244.0.24:37367 - 64790 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137381s
	[INFO] 10.244.0.24:53874 - 61210 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129517s
	[INFO] 10.244.0.24:35488 - 47376 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167054s
	[INFO] 10.244.0.24:39756 - 34231 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003382584s
	[INFO] 10.244.0.24:42692 - 8269 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003496461s
	[INFO] 10.244.0.24:40495 - 49254 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00344128s
	[INFO] 10.244.0.24:54381 - 40672 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003513746s
	[INFO] 10.244.0.24:45458 - 51280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002837809s
	[INFO] 10.244.0.24:39080 - 48381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003158709s
	[INFO] 10.244.0.24:49164 - 30651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00123377s
	[INFO] 10.244.0.24:33687 - 1000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001779254s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:35:38 +0000   Mon, 16 Sep 2024 10:23:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-7x6cj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  gcp-auth                    gcp-auth-89d5ffd79-wt6q9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-vlmkz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-x6gtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gm7kv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-dcrh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-c729p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-hhv7d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-xpx7m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-admission-77d7d48b68-t975d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-controllers-56675bb4d5-kd2r2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-scheduler-576bc46687-l88qd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ggfmd               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             298Mi (0%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x6 over 14m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 4f 68 84 7c 26 08 06
	[  +0.029810] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 4a d1 e3 09 35 08 06
	[  +2.541456] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 35 1c 77 2c 6a 08 06
	[Sep16 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 2e 0e e0 53 6a 08 06
	[  +1.979621] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	
	
	==> etcd [236092569fa7] <==
	{"level":"info","ts":"2024-09-16T10:23:22.169311Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.169894Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:23:22.169903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.169924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:23:22.170145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:23:22.170188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170298Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.171038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:23:22.172233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:23:34.396500Z","caller":"traceutil/trace.go:171","msg":"trace[1443924902] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"122.443714ms","start":"2024-09-16T10:23:34.274027Z","end":"2024-09-16T10:23:34.396470Z","steps":["trace[1443924902] 'process raft request'  (duration: 42.860188ms)","trace[1443924902] 'compare'  (duration: 79.401186ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:23:34.396568Z","caller":"traceutil/trace.go:171","msg":"trace[1914523289] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"119.254337ms","start":"2024-09-16T10:23:34.277291Z","end":"2024-09-16T10:23:34.396545Z","steps":["trace[1914523289] 'process raft request'  (duration: 119.164267ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396664Z","caller":"traceutil/trace.go:171","msg":"trace[551861205] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"121.694141ms","start":"2024-09-16T10:23:34.274951Z","end":"2024-09-16T10:23:34.396645Z","steps":["trace[551861205] 'process raft request'  (duration: 121.454274ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396765Z","caller":"traceutil/trace.go:171","msg":"trace[612276300] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"117.724007ms","start":"2024-09-16T10:23:34.279030Z","end":"2024-09-16T10:23:34.396754Z","steps":["trace[612276300] 'process raft request'  (duration: 117.466969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396775Z","caller":"traceutil/trace.go:171","msg":"trace[485760124] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"107.084096ms","start":"2024-09-16T10:23:34.289681Z","end":"2024-09-16T10:23:34.396765Z","steps":["trace[485760124] 'process raft request'  (duration: 106.857041ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396851Z","caller":"traceutil/trace.go:171","msg":"trace[655456638] linearizableReadLoop","detail":"{readStateIndex:770; appliedIndex:767; }","duration":"117.963693ms","start":"2024-09-16T10:23:34.278878Z","end":"2024-09-16T10:23:34.396842Z","steps":["trace[655456638] 'read index received'  (duration: 5.820633ms)","trace[655456638] 'applied index is now lower than readState.Index'  (duration: 112.141241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:23:34.396925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.026308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:23:34.396979Z","caller":"traceutil/trace.go:171","msg":"trace[1000991150] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate; range_end:; response_count:0; response_revision:752; }","duration":"118.092731ms","start":"2024-09-16T10:23:34.278875Z","end":"2024-09-16T10:23:34.396968Z","steps":["trace[1000991150] 'agreement among raft nodes before linearized reading'  (duration: 118.006643ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:38.471576Z","caller":"traceutil/trace.go:171","msg":"trace[1536302833] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"154.211147ms","start":"2024-09-16T10:23:38.317339Z","end":"2024-09-16T10:23:38.471550Z","steps":["trace[1536302833] 'process raft request'  (duration: 154.053853ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:33:22.188338Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2024-09-16T10:33:22.212714Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1554,"took":"23.934179ms","hash":4226216058,"current-db-size-bytes":7352320,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":3911680,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-16T10:33:22.212758Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4226216058,"revision":1554,"compact-revision":-1}
	
	
	==> gcp-auth [b806437d39cb] <==
	2024/09/16 10:24:56 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:38:00 up 20 min,  0 users,  load average: 0.29, 0.20, 0.18
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [f656d4b3e221] <==
	W0916 10:24:04.623446       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:05.663512       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:06.687369       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:07.741783       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:08.796077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:09.892806       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.278243       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.278280       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.279887       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.290102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.290145       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.291730       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.911493       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:11.942936       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:13.040622       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:14.059340       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:20.272187       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:20.272230       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.287211       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.287254       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.296283       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.296314       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	I0916 10:30:16.763857       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:30:17.782395       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:36:44.202861       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [abadc50dd44f] <==
	W0916 10:30:52.834739       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:30:52.834781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:31:29.517193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:31:29.517235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:32:14.237055       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:32:14.237103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:04.260642       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:04.260689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:49.953230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:49.953271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:34:30.366531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:34:30.366573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:35:18.546778       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:35:18.546822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:35:38.907117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	W0916 10:36:03.761315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:03.761365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:36:27.239533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.183µs"
	W0916 10:36:38.611788       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:38.611834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:10.128172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:10.128213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:42.411738       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:42.411793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:59.945055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="32.509µs"
	
	
	==> kube-proxy [95dfe8f64bc6] <==
	I0916 10:23:31.205838       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:31.406402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:23:31.406455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:31.489030       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:31.489102       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:31.508985       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:31.509483       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:31.509513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:31.539926       1 config.go:199] "Starting service config controller"
	I0916 10:23:31.540054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:31.559259       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:31.559278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:31.559824       1 config.go:328] "Starting node config controller"
	I0916 10:23:31.559836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:31.641834       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:31.660551       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:23:31.660598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0412032e5006] <==
	W0916 10:23:23.040568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:23.040650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.040660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:23.040674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:23.040716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.040756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.848417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.848457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.947205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.947244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.963782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:23.963827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.018222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:23:24.018276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.056374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:24.056418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.187965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:24.188004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.200436       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:24.200484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:23:24.239846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:24.239894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:27.139487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:38:00 UTC. --
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.054072   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-kube-api-access-bdbd4" (OuterVolumeSpecName: "kube-api-access-bdbd4") pod "c0a97873-e0c3-41a1-af0b-2ece8d95b20a" (UID: "c0a97873-e0c3-41a1-af0b-2ece8d95b20a"). InnerVolumeSpecName "kube-api-access-bdbd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.059883   16162 scope.go:117] "RemoveContainer" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f"
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152877   16162 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-modules\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152906   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bdbd4\" (UniqueName: \"kubernetes.io/projected/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-kube-api-access-bdbd4\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152918   16162 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-cgroup\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152930   16162 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-bpffs\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.391044   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a" path="/var/lib/kubelet/pods/c0a97873-e0c3-41a1-af0b-2ece8d95b20a/volumes"
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.624852   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd555\" (UniqueName: \"kubernetes.io/projected/1d335baf-98ff-41fd-9b89-ddd333da0dc4-kube-api-access-jd555\") pod \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\" (UID: \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\") "
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.624912   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d335baf-98ff-41fd-9b89-ddd333da0dc4-tmp-dir\") pod \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\" (UID: \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\") "
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.625177   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d335baf-98ff-41fd-9b89-ddd333da0dc4-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "1d335baf-98ff-41fd-9b89-ddd333da0dc4" (UID: "1d335baf-98ff-41fd-9b89-ddd333da0dc4"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.626978   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d335baf-98ff-41fd-9b89-ddd333da0dc4-kube-api-access-jd555" (OuterVolumeSpecName: "kube-api-access-jd555") pod "1d335baf-98ff-41fd-9b89-ddd333da0dc4" (UID: "1d335baf-98ff-41fd-9b89-ddd333da0dc4"). InnerVolumeSpecName "kube-api-access-jd555". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.725323   16162 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d335baf-98ff-41fd-9b89-ddd333da0dc4-tmp-dir\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.725365   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jd555\" (UniqueName: \"kubernetes.io/projected/1d335baf-98ff-41fd-9b89-ddd333da0dc4-kube-api-access-jd555\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.333823   16162 scope.go:117] "RemoveContainer" containerID="1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.350814   16162 scope.go:117] "RemoveContainer" containerID="1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: E0916 10:36:29.351844   16162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda" containerID="1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.351896   16162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"} err="failed to get container status \"1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.389305   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d335baf-98ff-41fd-9b89-ddd333da0dc4" path="/var/lib/kubelet/pods/1d335baf-98ff-41fd-9b89-ddd333da0dc4/volumes"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.341360   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlk7w\" (UniqueName: \"kubernetes.io/projected/456f019d-09af-4e09-9db8-cda9eda20ea3-kube-api-access-nlk7w\") pod \"456f019d-09af-4e09-9db8-cda9eda20ea3\" (UID: \"456f019d-09af-4e09-9db8-cda9eda20ea3\") "
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.343712   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456f019d-09af-4e09-9db8-cda9eda20ea3-kube-api-access-nlk7w" (OuterVolumeSpecName: "kube-api-access-nlk7w") pod "456f019d-09af-4e09-9db8-cda9eda20ea3" (UID: "456f019d-09af-4e09-9db8-cda9eda20ea3"). InnerVolumeSpecName "kube-api-access-nlk7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.373318   16162 scope.go:117] "RemoveContainer" containerID="fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.392731   16162 scope.go:117] "RemoveContainer" containerID="fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: E0916 10:38:00.393535   16162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602" containerID="fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.393576   16162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"} err="failed to get container status \"fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602\": rpc error: code = Unknown desc = Error response from daemon: No such container: fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.441998   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nlk7w\" (UniqueName: \"kubernetes.io/projected/456f019d-09af-4e09-9db8-cda9eda20ea3-kube-api-access-nlk7w\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	
	
	==> storage-provisioner [e19218997c83] <==
	I0916 10:23:33.807788       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:23:33.819755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:23:33.821506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:23:33.836239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:23:33.837177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	I0916 10:23:33.840556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"272307eb-dbc1-400e-a5a3-6595c2b694d1", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407 became leader
	I0916 10:23:33.937802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	

                                                
                                                
-- /stdout --
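An aside on the "attempting to acquire leader lease" / "successfully acquired lease" pair in the storage-provisioner log above: the provisioner elects a single active instance through a Kubernetes resource lock before starting its controller. Below is a minimal client-go sketch of that handshake, assuming the modern Lease-based lock; the provisioner in this log actually uses an older endpoints-based lock (note the Endpoints event), and the durations here are illustrative defaults, not minikube's.

// Minimal leader-election sketch (illustrative, not minikube's code).
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // unique identity per candidate

	// Lease name and namespace taken from the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // typical defaults, assumed
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// This is the point at which the log prints
				// "Starting provisioner controller ...".
				log.Println("became leader; start controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stop work")
			},
		},
	})
}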
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (303.728µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/HelmTiller (92.67s)
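Every kubectl invocation in this run dies before kubectl itself ever executes: "fork/exec /usr/local/bin/kubectl: exec format error" is Go's os/exec surfacing the kernel's ENOEXEC, which on Linux almost always means the binary at that path was built for a different architecture than the host (or is truncated/corrupt). A minimal, self-contained Go sketch of how an exec call hits and detects this error; the path is taken from the log, everything else is illustrative:

// Sketch: reproduce and classify the "exec format error" above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Any wrong-architecture or corrupt binary at this path
	// fails at fork/exec time, before the program runs.
	_, err := exec.Command("/usr/local/bin/kubectl", "version").Output()
	if errors.Is(err, syscall.ENOEXEC) {
		// os/exec wraps the *os.PathError from fork/exec,
		// so errors.Is can unwrap down to ENOEXEC.
		fmt.Println("exec format error: binary/host architecture mismatch?")
		return
	}
	fmt.Println("err:", err)
}

On the host, comparing the output of `file /usr/local/bin/kubectl` with `uname -m` would confirm or rule out the architecture mismatch.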

                                                
                                    
TestAddons/parallel/CSI (361.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.36526ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:570: (dbg) Non-zero exit: kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (364.174µs)
addons_test.go:572: creating sample PVC with kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (246.668µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
[... the Run / Non-zero exit / WARNING triplet above repeats 126 more times over the 6m0s wait; every attempt fails with "fork/exec /usr/local/bin/kubectl: exec format error", each in roughly 0.35 ms to 31 ms ...]
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.549µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (453.786µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (454.14µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.947µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.007µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.712µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.606µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.525µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (446.039µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.492µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.454µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (443.595µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (467.697µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (416.799µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (460.719µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.533µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (524.258µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (468.988µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (468.197µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (557.039µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (467.944µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.285µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.847µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.286µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (472.022µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.742µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.959µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.619µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.78µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (495.308µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (429.135µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.706µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (461.174µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (516.24µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.987µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (404.148µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.275µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (509.836µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.062µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.723µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (549.758µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.31µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (513.831µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (442.361µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (418.811µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.051µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.019µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (483.939µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (437.505µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.679µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (474.922µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (443.246µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (481.015µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (485.815µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (396.412µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (413.533µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (385.6µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.275µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (406.444µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.978µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.026µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (429.974µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.186µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.877µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (411.617µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (392.488µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.972µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.123µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (447.143µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.156µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.133µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (19.469199ms)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.249µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.743µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (421.099µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (430.783µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.081µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (475.208µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.094µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.397µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.308µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.952µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.76µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (421.559µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (406.007µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.08µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (488.62µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.822µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (426.343µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (431.137µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (438.922µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (400.684µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (434.298µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (407.707µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (427.593µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (408.824µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.439µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.956µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (510.986µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (480.21µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.577µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (431.174µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (502.531µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (496.953µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.571µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.974µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (619.491µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.444µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.1µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (472.751µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (495.913µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.99µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.314µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (525.147µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
[... the Run / Non-zero exit / WARNING triplet above repeats for 119 further poll attempts, each kubectl invocation failing in roughly 0.4-0.6ms with the same "fork/exec /usr/local/bin/kubectl: exec format error" ...]
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.41µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: context deadline exceeded
addons_test.go:576: failed waiting for PVC hpvc: context deadline exceeded
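The failure mode is consistent: "exec format error" from fork/exec means the kernel refused to execute /usr/local/bin/kubectl because the file is not a valid executable for this host, most often a wrong-architecture build, a truncated download, or an error page saved in place of the binary; the PVC itself is never checked, and the final poll simply hits the test's wait deadline. A minimal way to confirm this from a shell on the agent (hypothetical diagnostic commands, not part of the test harness):

	# Inspect the file type; on this linux/amd64 agent it should report an
	# ELF 64-bit x86-64 executable. Anything else (ASCII text, HTML, or a
	# different architecture) explains the exec format error.
	file /usr/local/bin/kubectl

	# Host architecture for comparison (x86_64 per the hostinfo line below).
	uname -m

	# A tiny or zero-length file points at a failed or interrupted download.
	ls -l /usr/local/bin/kubectl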
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40127               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:38 UTC |
	|         | helm-tiller --alsologtostderr        |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:13.140706   14731 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:13.140813   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140821   14731 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:13.140825   14731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:13.140993   14731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:23:13.141565   14731 out.go:352] Setting JSON to false
	I0916 10:23:13.142443   14731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":344,"bootTime":1726481849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:13.142536   14731 start.go:139] virtualization: kvm guest
	I0916 10:23:13.144838   14731 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:23:13.146162   14731 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:23:13.146197   14731 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:13.146202   14731 notify.go:220] Checking for updates...
	I0916 10:23:13.148646   14731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:13.149886   14731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:13.151023   14731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:23:13.152258   14731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:13.153558   14731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:13.154983   14731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:13.165097   14731 out.go:177] * Using the none driver based on user configuration
	I0916 10:23:13.166355   14731 start.go:297] selected driver: none
	I0916 10:23:13.166366   14731 start.go:901] validating driver "none" against <nil>
	I0916 10:23:13.166376   14731 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:13.166401   14731 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:23:13.166708   14731 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:23:13.167363   14731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:13.167640   14731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:13.167685   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:13.167734   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:13.167744   14731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:13.167818   14731 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:13.169383   14731 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:23:13.171024   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.171056   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:13.171177   14731 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:23:13.171205   14731 start.go:364] duration metric: took 15.507µs to acquireMachinesLock for "minikube"
	I0916 10:23:13.171217   14731 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:13.171280   14731 start.go:125] createHost starting for "" (driver="none")
	I0916 10:23:13.173420   14731 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 10:23:13.174682   14731 exec_runner.go:51] Run: systemctl --version
	I0916 10:23:13.177006   14731 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 10:23:13.177034   14731 client.go:168] LocalClient.Create starting
	I0916 10:23:13.177131   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem
	I0916 10:23:13.177168   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177190   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177253   14731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem
	I0916 10:23:13.177275   14731 main.go:141] libmachine: Decoding PEM data...
	I0916 10:23:13.177285   14731 main.go:141] libmachine: Parsing certificate...
	I0916 10:23:13.177573   14731 client.go:171] duration metric: took 533.456µs to LocalClient.Create
	I0916 10:23:13.177599   14731 start.go:167] duration metric: took 593.576µs to libmachine.API.Create "minikube"
	I0916 10:23:13.177608   14731 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:23:13.177642   14731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:13.177683   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:13.187236   14731 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:13.187263   14731 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:13.187275   14731 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:13.189044   14731 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:23:13.190345   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:23:13.190401   14731 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:23:13.190422   14731 start.go:296] duration metric: took 12.809081ms for postStartSetup
	I0916 10:23:13.191528   14731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:23:13.191738   14731 start.go:128] duration metric: took 20.449605ms to createHost
	I0916 10:23:13.191749   14731 start.go:83] releasing machines lock for "minikube", held for 20.535411ms
	I0916 10:23:13.192580   14731 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:13.192644   14731 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:23:13.194590   14731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:23:13.194649   14731 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:13.202734   14731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:23:13.202757   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.202792   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.202889   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.222327   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:13.230703   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:13.239020   14731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:13.239101   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:13.248805   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.257191   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:13.265887   14731 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:13.274565   14731 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:13.283401   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:13.292383   14731 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:13.300868   14731 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:13.309031   14731 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:13.315780   14731 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
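	(Writing 1 to /proc/sys/net/ipv4/ip_forward enables IPv4 forwarding, which the bridge CNI and kube-proxy rely on to move pod traffic across the host. The equivalent, equally non-persistent, sysctl form:
	    sudo sysctl -w net.ipv4.ip_forward=1
	)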
	I0916 10:23:13.322874   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:13.538903   14731 exec_runner.go:51] Run: sudo systemctl restart containerd
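	(The sed edits above rewrite containerd's CRI plugin settings in place. A minimal sketch of the sections they touch in /etc/containerd/config.toml, assuming containerd's v2 config layout; the real file on this host carries many more keys:
	    [plugins."io.containerd.grpc.v1.cri"]
	      # re-inserted directly under this header by the sed above
	      enable_unprivileged_ports = true
	      # pod sandbox (pause) image pinned by minikube
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        # "cgroupfs" was detected on the host, so the systemd cgroup driver stays off
	        SystemdCgroup = false
	)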
	I0916 10:23:13.606063   14731 start.go:495] detecting cgroup driver to use...
	I0916 10:23:13.606117   14731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:13.606219   14731 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:13.625810   14731 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:23:13.626697   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:23:13.634078   14731 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:23:13.634095   14731 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.634125   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.641943   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:23:13.642067   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube17162235 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:23:13.649525   14731 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:23:13.864371   14731 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:23:14.080198   14731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:23:14.080354   14731 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:23:14.080369   14731 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:23:14.080415   14731 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:23:14.088510   14731 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:23:14.088647   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube258152288 /etc/docker/daemon.json
	I0916 10:23:14.096396   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:14.312903   14731 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:23:14.614492   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:23:14.624711   14731 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:23:14.641378   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:14.651444   14731 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:23:14.875541   14731 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:23:15.086384   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.300370   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:23:15.313951   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:23:15.324456   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:15.540454   14731 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:23:15.606406   14731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:23:15.606476   14731 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:23:15.607900   14731 start.go:563] Will wait 60s for crictl version
	I0916 10:23:15.607956   14731 exec_runner.go:51] Run: which crictl
	I0916 10:23:15.608880   14731 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:23:15.638324   14731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:23:15.638393   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.658714   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:15.681662   14731 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:23:15.681764   14731 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:15.684836   14731 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:23:15.686171   14731 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:15.686280   14731 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:23:15.686290   14731 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0916 10:23:15.686371   14731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
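	(The unit fragment above uses systemd's drop-in override idiom: a kubelet.service.d drop-in may set ExecStart only after clearing the inherited value with an empty assignment, otherwise systemd rejects a second ExecStart for a non-oneshot service. A sketch of the 10-kubeadm.conf drop-in reconstructed from the flags logged above; the file's exact bytes are not shown in the log:
	    [Unit]
	    Wants=docker.socket
	
	    [Service]
	    # an empty assignment clears ExecStart inherited from kubelet.service...
	    ExecStart=
	    # ...so this line replaces it rather than adding a second command
	    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	)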
	I0916 10:23:15.686410   14731 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:23:15.733026   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.733051   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:15.733070   14731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:15.733090   14731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:15.733254   14731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
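	(The three documents above still use the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but flags as deprecated — see the warnings during init further down. A quick way to check or upgrade a rendered file before running init, assuming kubeadm v1.31's config subcommands:
	    # validate the file against the schema of the kubeadm binary in use
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # rewrite it to the current API version (v1beta4 in kubeadm v1.31)
	    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-v1beta4.yaml
	)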
	I0916 10:23:15.733305   14731 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.741208   14731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:23:15.741251   14731 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:15.748963   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:23:15.748989   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:23:15.748971   14731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:23:15.749021   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:15.749048   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:23:15.749023   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:23:15.759703   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:23:15.804184   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000397322 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:23:15.808532   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3573748997 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:23:15.825059   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036820018 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:23:15.890865   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:15.899083   14731 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:23:15.899106   14731 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.899146   14731 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.906895   14731 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:23:15.907034   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube686635375 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:23:15.914549   14731 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:23:15.914568   14731 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:23:15.914597   14731 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:23:15.921424   14731 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:15.921543   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube124460998 /lib/systemd/system/kubelet.service
	I0916 10:23:15.930481   14731 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:23:15.930611   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4089828324 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:23:15.938132   14731 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
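	(The grep checks whether /etc/hosts already maps the control-plane endpoint; when it is absent, minikube appends an entry so that control-plane.minikube.internal:8443, used in the kubeconfig and certificates, resolves to the local node. The expected line has the form:
	    10.138.0.48	control-plane.minikube.internal
	)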
	I0916 10:23:15.939361   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:16.143380   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:16.158863   14731 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:23:16.158890   14731 certs.go:194] generating shared ca certs ...
	I0916 10:23:16.158911   14731 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.159062   14731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:23:16.159122   14731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:23:16.159135   14731 certs.go:256] generating profile certs ...
	I0916 10:23:16.159199   14731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:23:16.159225   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 10:23:16.405613   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt ...
	I0916 10:23:16.405642   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt: {Name:mk3286357234cda40557f508e5029c93016f9710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405782   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key ...
	I0916 10:23:16.405793   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key: {Name:mk20783244a73e90e04cdbc506e3032ad365b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.405856   14731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:23:16.405870   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0916 10:23:16.569943   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0916 10:23:16.569971   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkaaeb0c21c9904b79d53b2917cee631d41c921c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570095   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0916 10:23:16.570104   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06e5d9a924eb3ef87fa2b5fa51a9f83a4abb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.570154   14731 certs.go:381] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	I0916 10:23:16.570220   14731 certs.go:385] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key
	I0916 10:23:16.570270   14731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:23:16.570283   14731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 10:23:16.840205   14731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 10:23:16.840238   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt: {Name:mkffd4795ad0708e29c9e63a9f73c6e601584e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840383   14731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key ...
	I0916 10:23:16.840393   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key: {Name:mk1595e9621083c2801a11be8a4c6d2c56ebeb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:16.840537   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:23:16.840569   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:16.840594   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:16.840624   14731 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:23:16.841173   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:16.841296   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube746649098 /var/lib/minikube/certs/ca.crt
	I0916 10:23:16.850974   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:23:16.851102   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216583324 /var/lib/minikube/certs/ca.key
	I0916 10:23:16.859052   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:16.859162   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2429656602 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:23:16.867993   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:16.868122   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31356631 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:23:16.876316   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:23:16.876432   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2172809749 /var/lib/minikube/certs/apiserver.crt
	I0916 10:23:16.883937   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:16.884043   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3752504884 /var/lib/minikube/certs/apiserver.key
	I0916 10:23:16.891211   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:16.891348   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1611886685 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:23:16.898521   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:16.898630   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414896728 /var/lib/minikube/certs/proxy-client.key
	I0916 10:23:16.905794   14731 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:23:16.905813   14731 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.905843   14731 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.913039   14731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:16.913160   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817740740 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.920335   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:16.920430   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1902791778 /var/lib/minikube/kubeconfig
	I0916 10:23:16.929199   14731 exec_runner.go:51] Run: openssl version
	I0916 10:23:16.931944   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:16.940176   14731 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941576   14731 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.941622   14731 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:16.944402   14731 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
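	(The b5213941.0 link name follows OpenSSL's trust-store convention: certificates in /etc/ssl/certs are looked up by subject-name hash, with a .0 suffix to disambiguate collisions. The hash comes from the x509 -hash call above, and the link can be checked directly:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    readlink -f /etc/ssl/certs/b5213941.0    # resolves to the minikubeCA.pem copy
	)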
	I0916 10:23:16.952213   14731 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:16.953336   14731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:16.953373   14731 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:16.953468   14731 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:23:16.968833   14731 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:16.976751   14731 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:16.984440   14731 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:23:17.005001   14731 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:17.013500   14731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:17.013523   14731 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:17.013559   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:17.021530   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:17.021577   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:17.029363   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:17.038339   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:17.038392   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:17.046433   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.055974   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:17.056021   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:17.064002   14731 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:17.087369   14731 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:17.087421   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:17.094700   14731 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:23:17.125739   14731 kubeadm.go:310] W0916 10:23:17.125617   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.126248   14731 kubeadm.go:310] W0916 10:23:17.126207   15616 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:17.127875   14731 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:17.127925   14731 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:17.218197   14731 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:17.218241   14731 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:17.218245   14731 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:17.218250   14731 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:17.228659   14731 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:17.231432   14731 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:17.231476   14731 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:17.231492   14731 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:17.409888   14731 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:17.475990   14731 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:17.539491   14731 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:17.796104   14731 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:18.073234   14731 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:18.073357   14731 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.366388   14731 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:18.366499   14731 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:23:18.555987   14731 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:18.639688   14731 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:18.710297   14731 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:18.710445   14731 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:19.161742   14731 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:19.258436   14731 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:19.315076   14731 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:19.572576   14731 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:19.765615   14731 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:19.766182   14731 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:19.768469   14731 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:19.770925   14731 out.go:235]   - Booting up control plane ...
	I0916 10:23:19.770956   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:19.770979   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:19.770988   14731 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:19.791511   14731 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:19.797034   14731 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:19.797064   14731 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:20.020707   14731 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:20.020728   14731 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:20.522367   14731 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.615965ms
	I0916 10:23:20.522388   14731 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:24.524089   14731 kubeadm.go:310] [api-check] The API server is healthy after 4.001711526s
	I0916 10:23:24.534645   14731 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:24.545508   14731 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:24.561586   14731 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:24.561610   14731 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:24.569540   14731 kubeadm.go:310] [bootstrap-token] Using token: 60y8iu.vk0rxdhc25utw4uo
	I0916 10:23:24.571078   14731 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:24.571112   14731 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:24.575563   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:24.581879   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:24.584635   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:24.587409   14731 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:24.589877   14731 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:24.929369   14731 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:25.351323   14731 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:25.929753   14731 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:25.930651   14731 kubeadm.go:310] 
	I0916 10:23:25.930669   14731 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:25.930673   14731 kubeadm.go:310] 
	I0916 10:23:25.930677   14731 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:25.930693   14731 kubeadm.go:310] 
	I0916 10:23:25.930705   14731 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:25.930710   14731 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:25.930713   14731 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:25.930717   14731 kubeadm.go:310] 
	I0916 10:23:25.930721   14731 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:25.930725   14731 kubeadm.go:310] 
	I0916 10:23:25.930730   14731 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:25.930737   14731 kubeadm.go:310] 
	I0916 10:23:25.930742   14731 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:25.930749   14731 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:25.930753   14731 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:25.930759   14731 kubeadm.go:310] 
	I0916 10:23:25.930763   14731 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:25.930765   14731 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:25.930768   14731 kubeadm.go:310] 
	I0916 10:23:25.930770   14731 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930773   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d \
	I0916 10:23:25.930778   14731 kubeadm.go:310] 	--control-plane 
	I0916 10:23:25.930781   14731 kubeadm.go:310] 
	I0916 10:23:25.930784   14731 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:25.930791   14731 kubeadm.go:310] 
	I0916 10:23:25.930794   14731 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 60y8iu.vk0rxdhc25utw4uo \
	I0916 10:23:25.930798   14731 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d 
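The sha256 discovery hash in the join commands is a hash of the cluster CA's public key, so it can be recomputed independently on the control plane and compared against the value printed above; the standard openssl pipeline for this is:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
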
	I0916 10:23:25.933502   14731 cni.go:84] Creating CNI manager for ""
	I0916 10:23:25.933525   14731 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:25.935106   14731 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:23:25.936272   14731 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:23:25.946405   14731 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:23:25.946528   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951121141 /etc/cni/net.d/1-k8s.conflist
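The 496-byte conflist staged into /etc/cni/net.d/1-k8s.conflist configures the bridge CNI chosen above. The log does not show its contents; a representative bridge-plus-portmap conflist of this shape (illustrative only, written to a scratch path — minikube's rendered file may differ in detail) would be:

	cat <<'EOF' > /tmp/bridge-example.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
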
	I0916 10:23:25.957597   14731 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:25.957652   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:25.957691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 10:23:25.966602   14731 ops.go:34] apiserver oom_adj: -16
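oom_adj is the legacy kernel OOM knob (scale -17..15, mapped internally onto oom_score_adj's -1000..1000); a value of -16 tells the OOM killer to spare the apiserver in all but the most desperate situations. The check from the log can be reproduced with:

	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	sudo cat /proc/$pid/oom_adj        # legacy scale; -16 in this run
	sudo cat /proc/$pid/oom_score_adj  # modern equivalent
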
	I0916 10:23:26.024809   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:26.524979   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.025101   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:27.525561   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.024962   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:28.525631   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.025594   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:29.525691   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.024918   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:30.524850   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.024821   14731 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:31.098521   14731 kubeadm.go:1113] duration metric: took 5.140910239s to wait for elevateKubeSystemPrivileges
	I0916 10:23:31.098550   14731 kubeadm.go:394] duration metric: took 14.145180358s to StartCluster
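The half-second cadence of "kubectl get sa default" calls above is minikube polling for the default ServiceAccount, which the controller-manager creates asynchronously after init (a proxy for "the control plane can admit pods"). An equivalent shell loop:

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
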
	I0916 10:23:31.098572   14731 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.098640   14731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:23:31.099273   14731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:31.099484   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:31.099563   14731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:31.099694   14731 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:23:31.099713   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.099725   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:23:31.099724   14731 addons.go:69] Setting yakd=true in profile "minikube"
	I0916 10:23:31.099749   14731 addons.go:234] Setting addon yakd=true in "minikube"
	I0916 10:23:31.099762   14731 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0916 10:23:31.099777   14731 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0916 10:23:31.099788   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.099807   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100187   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100203   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100227   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100376   14731 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0916 10:23:31.100405   14731 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0916 10:23:31.100436   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.100438   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100445   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.100453   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100459   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.100485   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100491   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.100769   14731 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0916 10:23:31.100790   14731 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0916 10:23:31.100826   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.101070   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101090   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101123   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101267   14731 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0916 10:23:31.101295   14731 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0916 10:23:31.101510   14731 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0916 10:23:31.101527   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101535   14731 mustload.go:65] Loading cluster: minikube
	I0916 10:23:31.101541   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101572   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.101737   14731 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:23:31.101867   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.101887   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.101919   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102148   14731 addons.go:69] Setting volcano=true in profile "minikube"
	I0916 10:23:31.102169   14731 addons.go:234] Setting addon volcano=true in "minikube"
	I0916 10:23:31.102195   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.102220   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102233   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102253   14731 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0916 10:23:31.102265   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.102298   14731 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:31.102312   14731 out.go:177] * Configuring local host environment ...
	I0916 10:23:31.102789   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.102801   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.102825   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.103836   14731 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0916 10:23:31.103861   14731 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0916 10:23:31.103905   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104241   14731 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:23:31.104257   14731 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:23:31.104275   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104742   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104753   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.104763   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104773   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.104784   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104812   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.104956   14731 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0916 10:23:31.102331   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104975   14731 addons.go:69] Setting registry=true in profile "minikube"
	I0916 10:23:31.104984   14731 addons.go:234] Setting addon registry=true in "minikube"
	I0916 10:23:31.105000   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.105157   14731 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0916 10:23:31.105184   14731 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0916 10:23:31.105213   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.104967   14731 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0916 10:23:31.105323   14731 host.go:66] Checking if "minikube" exists ...
	W0916 10:23:31.106873   14731 out.go:270] * 
	W0916 10:23:31.106888   14731 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:23:31.106896   14731 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:23:31.106903   14731 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:23:31.106909   14731 out.go:270] * 
	W0916 10:23:31.106955   14731 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:23:31.106962   14731 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:23:31.106971   14731 out.go:270] * 
	W0916 10:23:31.106995   14731 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:23:31.107002   14731 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:23:31.107009   14731 out.go:270] * 
	W0916 10:23:31.107018   14731 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
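Per the warning block above, the none driver runs everything as root, so kubeconfig and profile data land under the invoking user's home (/home/jenkins here). The env-var route it suggests — assuming a sudoers policy that lets -E pass the variable through — looks like:

	export CHANGE_MINIKUBE_NONE_USER=true
	sudo -E minikube start --driver=none
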
	I0916 10:23:31.107045   14731 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:31.107984   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.107997   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.108026   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.108454   14731 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:31.109770   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.109792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.109828   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.110054   14731 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:23:31.124712   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.127087   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.128504   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.130104   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.138756   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.138792   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.138831   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.139721   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.139749   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.139779   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.142090   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142122   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142129   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142151   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.142345   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.156934   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.156999   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.158343   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.158400   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.160580   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.163820   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.169364   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.171885   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.171953   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.173802   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.173849   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.174374   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.174420   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176241   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.176292   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.176846   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.185299   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.186516   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.186575   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.194708   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.194738   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.194977   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.195032   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199863   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199893   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.199933   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.199946   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.200834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.200854   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.201607   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.201750   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.205007   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.205028   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.205039   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.205094   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.206485   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.210587   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
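The freezer dance in this stretch is minikube's cgroup-v1 liveness probe: resolve the apiserver's freezer cgroup from /proc/<pid>/cgroup, confirm freezer.state is THAWED (i.e. the process group is not frozen), then hit /healthz. Condensed into one shell sequence:

	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	cg=$(sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer${cg}/freezer.state   # expect: THAWED
	curl -sk https://10.138.0.48:8443/healthz            # expect: ok
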
	I0916 10:23:31.212372   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.212395   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.213745   14731 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:31.214160   14731 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 10:23:31.214415   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.216499   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.216520   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.216547   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.217076   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:31.217112   14731 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:31.217909   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube143406645 /etc/kubernetes/addons/yakd-ns.yaml
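Every addon manifest in this section follows the same staging pattern: render the YAML (from memory or an embedded asset), write it to a random /tmp/minikubeNNNN file, then sudo cp -a it into /etc/kubernetes/addons — with the none driver, exec_runner runs these directly on the host. Schematically, with $MANIFEST standing in for the rendered YAML (a placeholder, not a variable from the log):

	tmp=$(mktemp /tmp/minikubeXXXXXX)
	printf '%s' "$MANIFEST" > "$tmp"
	sudo cp -a "$tmp" /etc/kubernetes/addons/yakd-ns.yaml
	rm -f "$tmp"
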
	I0916 10:23:31.218842   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.219226   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.219253   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.220512   14731 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:31.220867   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.221546   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.223173   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.221979   14731 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.223461   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:31.223768   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3150586776 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.225359   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.227613   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.227660   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.229063   14731 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:31.229334   14731 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
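The sed pipeline above splices a hosts stanza into the Corefile held in the coredns ConfigMap, mapping host.minikube.internal to 127.0.0.1 (the host itself, since this is the none driver), and adds a log directive before errors. The patched Corefile can be read back with:

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

and, per the sed expressions, the injected block should read:

	        hosts {
	           127.0.0.1 host.minikube.internal
	           fallthrough
	        }
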
	I0916 10:23:31.230849   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.230883   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231177   14731 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:31.231657   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.231693   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234554   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.231695   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.234684   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.232274   14731 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:31.235888   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.236046   14731 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236071   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:31.236209   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3107188705 /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.236904   14731 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:31.238542   14731 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.238573   14731 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:31.238771   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2095578904 /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:31.239882   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.240045   14731 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:31.244446   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.245954   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:31.246834   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252064   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.246956   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:31.252578   14731 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.252624   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:31.246990   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.252873   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247002   14731 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:31.253137   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube95020260 /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.247038   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253167   14731 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:31.253286   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2405129530 /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:31.253617   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.253668   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.247061   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.253722   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.247236   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:31.255868   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.255894   14731 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:31.255954   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.255976   14731 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:31.256002   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671809590 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:31.256098   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1236849984 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:31.257119   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:31.257771   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:31.259551   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259704   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.259965   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.260128   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.260751   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.261489   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.261250   14731 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:31.261394   14731 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0916 10:23:31.262031   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.262778   14731 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:31.262782   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:31.262800   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:31.262829   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:31.262833   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:31.264514   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264537   14731 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:23:31.264545   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264584   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.264768   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:31.264924   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.264959   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:31.265088   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2364820269 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:31.266759   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.268033   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:31.268086   14731 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:31.269452   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:31.269500   14731 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:31.272346   14731 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272373   14731 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:31.272497   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754220183 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:31.272890   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:31.275160   14731 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275188   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:31.275361   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2480903723 /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:31.275532   14731 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:31.277158   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277179   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:31.277664   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube478526718 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:31.277859   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.277882   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:31.278022   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2636867839 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:31.290799   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.290835   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:31.291218   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3814086991 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:31.295428   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.295459   14731 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:31.295604   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3740101312 /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:31.306392   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.306425   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.311213   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.311248   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:31.311424   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube747122049 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:31.312994   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.313036   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:31.317835   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.318230   14731 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:31.323578   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube338558244 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:31.341814   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.341846   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:31.341971   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323528791 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:31.342204   14731 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.342226   14731 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:31.342566   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.342625   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.342837   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:31.342890   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube292318438 /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:31.343078   14731 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.343101   14731 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:31.343219   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4032243386 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:31.358435   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.358525   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358549   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:31.358693   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2881932452 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:31.358881   14731 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:31.359009   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1282728706 /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:31.359505   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:31.366545   14731 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.366587   14731 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:31.366713   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1171915216 /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:31.378664   14731 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.378695   14731 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:31.378815   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473351497 /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.380393   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.380417   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.382937   14731 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.382966   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:31.383096   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2529455688 /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.384304   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.384326   14731 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:31.384438   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube881397 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:31.385231   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.385271   14731 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385284   14731 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:23:31.385292   14731 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.385328   14731 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.387805   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.387835   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:31.387939   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332358551 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:31.390197   14731 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.390227   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:31.390366   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46497832 /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.397672   14731 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:31.397951   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3186992100 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.403599   14731 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.403630   14731 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:31.403754   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445986553 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.409076   14731 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.409115   14731 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:31.409283   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1651200957 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:31.415599   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.415621   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:31.415721   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918202348 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:31.417404   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:31.423447   14731 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423472   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:23:31.423586   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube419582909 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.423765   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:31.423804   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:31.436943   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:31.438121   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:31.443433   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.443523   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:31.443757   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube41635707 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:31.462088   14731 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462127   14731 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:23:31.462266   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805595243 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:23:31.462657   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:31.462783   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160047024 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.464607   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:31.476223   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:31.479433   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:31.479463   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:31.482688   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:31.487583   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:31.490669   14731 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:31.492378   14731 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:31.493942   14731 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.493975   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:31.494108   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281912972 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.499328   14731 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499357   14731 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:23:31.499374   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.499400   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:31.499487   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2719508217 /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:23:31.499527   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411641332 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:31.518103   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:31.577544   14731 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.577588   14731 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:23:31.577779   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601059446 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:23:31.583317   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:31.651738   14731 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.651774   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:23:31.653267   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1921119500 /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.672720   14731 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:23:31.786205   14731 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789214   14731 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:23:31.789238   14731 node_ready.go:38] duration metric: took 2.992874ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:23:31.789249   14731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:31.802669   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
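These readiness waits have direct kubectl equivalents, useful when reproducing the test's expectations by hand (node and pod names below are the ones from this run):

	kubectl wait --for=condition=Ready node/ubuntu-20-agent-2 --timeout=360s
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/coredns-7c65d6cfc9-hd5hq --timeout=360s
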
	I0916 10:23:31.813190   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.813232   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:23:31.813392   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591024036 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:23:31.863589   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:23:31.965015   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.965162   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:23:31.966268   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3974451214 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:23:31.977982   14731 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0916 10:23:32.088850   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.088892   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:23:32.089762   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434131392 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:23:32.191154   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.191186   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:23:32.191329   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube332266551 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:23:32.242672   14731 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.242725   14731 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:23:32.243830   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2503739100 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.299481   14731 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0916 10:23:32.324442   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:23:32.403566   14731 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0916 10:23:32.489342   14731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
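The rescale above is minikube trimming CoreDNS from its default two replicas down to one for this single-node cluster. The equivalent manual step, assuming kubectl is pointed at the same cluster, would be:

		# scale the kube-system CoreDNS deployment to a single replica
		kubectl -n kube-system scale deployment coredns --replicas=1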
	I0916 10:23:32.514409   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.096961786s)
	I0916 10:23:32.514451   14731 addons.go:475] Verifying addon registry=true in "minikube"
	I0916 10:23:32.516449   14731 out.go:177] * Verifying registry addon...
	I0916 10:23:32.528963   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:23:32.532579   14731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:23:32.532675   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:32.570911   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.088181519s)
	I0916 10:23:32.907708   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.389561221s)
	I0916 10:23:32.966699   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.383338477s)
	I0916 10:23:33.052703   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.126489   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.262849545s)
	I0916 10:23:33.178161   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.713502331s)
	W0916 10:23:33.178208   14731 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:23:33.178247   14731 retry.go:31] will retry after 159.834349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
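The failure above is a create-ordering race: the VolumeSnapshotClass object was submitted in the same kubectl apply batch as the CRDs that define it, and the API server had not yet established the new CRD when the REST mapping for kind "VolumeSnapshotClass" was looked up, so minikube schedules a retry. A minimal sketch of avoiding the same race by hand, assuming kubectl is configured for this cluster:

		# install the CRD first and wait until the API server has established it
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# only then create objects of the new kind
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml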
	I0916 10:23:33.338693   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:33.540389   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:33.809689   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:34.053876   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.539589   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:34.570200   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.231431807s)
	I0916 10:23:34.612191   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.252641903s)
	I0916 10:23:34.884849   14731 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.560344146s)
	I0916 10:23:34.884890   14731 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0916 10:23:34.886878   14731 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:23:34.890123   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:23:34.895733   14731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:23:34.895758   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.033190   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.396363   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:35.534375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:35.895151   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.035637   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.308497   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:36.395655   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:36.533207   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:36.895449   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.033542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.395180   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:37.533433   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:37.895384   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.033538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.473613   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:23:38.473795   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398753053 /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.474692   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:38.484004   14731 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:23:38.484134   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3434783837 /var/lib/minikube/google_cloud_project
	I0916 10:23:38.494551   14731 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0916 10:23:38.494595   14731 host.go:66] Checking if "minikube" exists ...
	I0916 10:23:38.495054   14731 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:23:38.495069   14731 api_server.go:166] Checking apiserver status ...
	I0916 10:23:38.495094   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:38.511610   14731 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/16036/cgroup
	I0916 10:23:38.520861   14731 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61"
	I0916 10:23:38.520914   14731 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/f656d4b3e221b5f665bfdf0a3e305c5c4a878e1f64a2cf70e16b9fac1024bd61/freezer.state
	I0916 10:23:38.529401   14731 api_server.go:204] freezer state: "THAWED"
	I0916 10:23:38.529444   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:38.599469   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
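With the none driver there is no VM to tunnel into, so minikube verifies the apiserver directly on the host: pgrep locates the kube-apiserver process, its freezer cgroup is read to confirm the container is not paused, and the /healthz endpoint is probed. An equivalent manual probe, assuming the same apiserver address (-k skips verification of the cluster's self-signed certificate):

		# a healthy apiserver returns HTTP 200 with the body "ok"
		curl -k https://10.138.0.48:8443/healthz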
	I0916 10:23:38.599542   14731 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:23:38.600327   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:38.656167   14731 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:23:38.735860   14731 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:38.798815   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.798859   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:23:38.798995   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2626597480 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:23:38.808091   14731 pod_ready.go:103] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:23:38.862000   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.862041   14731 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:23:38.862151   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046341520 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:23:38.872893   14731 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.872922   14731 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:23:38.873036   14731 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2054254500 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.883326   14731 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:23:38.894333   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.033277   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.262619   14731 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0916 10:23:39.264955   14731 out.go:177] * Verifying gcp-auth addon...
	I0916 10:23:39.266807   14731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:23:39.268717   14731 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:23:39.310878   14731 pod_ready.go:98] pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310904   14731 pod_ready.go:82] duration metric: took 7.508146008s for pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace to be "Ready" ...
	E0916 10:23:39.310915   14731 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hd5hq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 10:23:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 10:23:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 10:23:32 +0000 UTC,FinishedAt:2024-09-16 10:23:38 +0000 UTC,ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bec8abc0b6e731cbae2c9715fb06ba9dc067208257528dd027a46790b7ec6a7f Started:0xc0003d52d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001cf62e0} {Name:kube-api-access-5lpx8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001cf62f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 10:23:39.310924   14731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:39.395512   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:39.532567   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:39.894633   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.033580   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.394602   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.533200   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:40.815447   14731 pod_ready.go:93] pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.815468   14731 pod_ready.go:82] duration metric: took 1.504536219s for pod "coredns-7c65d6cfc9-vlmkz" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.815477   14731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819153   14731 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.819171   14731 pod_ready.go:82] duration metric: took 3.688538ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.819180   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822800   14731 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.822815   14731 pod_ready.go:82] duration metric: took 3.628798ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.822823   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826537   14731 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.826556   14731 pod_ready.go:82] duration metric: took 3.726729ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.826567   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.894014   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:40.906975   14731 pod_ready.go:93] pod "kube-proxy-gm7kv" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:40.906995   14731 pod_ready.go:82] duration metric: took 80.421296ms for pod "kube-proxy-gm7kv" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:40.907005   14731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.033182   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.307459   14731 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.307479   14731 pod_ready.go:82] duration metric: took 400.467827ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.307488   14731 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.394410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.532263   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:41.707267   14731 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace has status "Ready":"True"
	I0916 10:23:41.707293   14731 pod_ready.go:82] duration metric: took 399.79657ms for pod "nvidia-device-plugin-daemonset-dcrh9" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:41.707305   14731 pod_ready.go:39] duration metric: took 9.918041839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:41.707331   14731 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:23:41.707469   14731 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:23:41.727079   14731 api_server.go:72] duration metric: took 10.620002836s to wait for apiserver process to appear ...
	I0916 10:23:41.727105   14731 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:23:41.727130   14731 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:23:41.731666   14731 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:23:41.732551   14731 api_server.go:141] control plane version: v1.31.1
	I0916 10:23:41.732571   14731 api_server.go:131] duration metric: took 5.460229ms to wait for apiserver health ...
	I0916 10:23:41.732579   14731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:23:41.894027   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:41.998997   14731 system_pods.go:59] 17 kube-system pods found
	I0916 10:23:41.999033   14731 system_pods.go:61] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:41.999047   14731 system_pods.go:61] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:41.999057   14731 system_pods.go:61] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:41.999075   14731 system_pods.go:61] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:41.999087   14731 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:41.999092   14731 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:41.999096   14731 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:41.999099   14731 system_pods.go:61] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:41.999104   14731 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:41.999111   14731 system_pods.go:61] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:41.999114   14731 system_pods.go:61] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:41.999127   14731 system_pods.go:61] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:41.999138   14731 system_pods.go:61] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:41.999147   14731 system_pods.go:61] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999159   14731 system_pods.go:61] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:41.999164   14731 system_pods.go:61] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:41.999175   14731 system_pods.go:61] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:41.999182   14731 system_pods.go:74] duration metric: took 266.598276ms to wait for pod list to return data ...
	I0916 10:23:41.999196   14731 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:23:42.032591   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.106881   14731 default_sa.go:45] found service account: "default"
	I0916 10:23:42.106907   14731 default_sa.go:55] duration metric: took 107.703967ms for default service account to be created ...
	I0916 10:23:42.106918   14731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:23:42.375306   14731 system_pods.go:86] 17 kube-system pods found
	I0916 10:23:42.375339   14731 system_pods.go:89] "coredns-7c65d6cfc9-vlmkz" [11b1173b-6e2d-4f71-a52d-be0c2f12dc15] Running
	I0916 10:23:42.375347   14731 system_pods.go:89] "csi-hostpath-attacher-0" [bed7f975-4be1-44a8-87c5-c83ba4a48cd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:23:42.375355   14731 system_pods.go:89] "csi-hostpath-resizer-0" [c0a151ba-0d32-45d9-9cb0-4f4386a75794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:23:42.375362   14731 system_pods.go:89] "csi-hostpathplugin-x6gtw" [dbf37c43-7569-4133-ba69-a501241bc9e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:23:42.375367   14731 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6e000368-c8e8-4771-82fc-b72e9c25c9bb] Running
	I0916 10:23:42.375372   14731 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [2d6223cf-3743-4d4f-88a6-5e95d78ef2cc] Running
	I0916 10:23:42.375377   14731 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [5990b756-d569-4c65-ad0f-4c00ab948339] Running
	I0916 10:23:42.375382   14731 system_pods.go:89] "kube-proxy-gm7kv" [7723a3cd-8a65-4721-a1a7-26867bbb8e74] Running
	I0916 10:23:42.375385   14731 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7eb6ff06-fd8c-417e-a508-05d125215e07] Running
	I0916 10:23:42.375395   14731 system_pods.go:89] "metrics-server-84c5f94fbc-wfrnf" [1d335baf-98ff-41fd-9b89-ddd333da0dc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:23:42.375400   14731 system_pods.go:89] "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
	I0916 10:23:42.375405   14731 system_pods.go:89] "registry-66c9cd494c-9ffzq" [6713b497-3d64-4b59-8553-56cccb541c50] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:23:42.375411   14731 system_pods.go:89] "registry-proxy-qvvnb" [6b3bd156-0501-41a1-8285-865292e17bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:23:42.375417   14731 system_pods.go:89] "snapshot-controller-56fcc65765-c729p" [ec6ba009-b5f3-4961-9ecf-3495c3ba295e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375425   14731 system_pods.go:89] "snapshot-controller-56fcc65765-hhv7d" [9e7f5908-39a8-4edb-9a01-2132569d8e13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:23:42.375429   14731 system_pods.go:89] "storage-provisioner" [795eb696-3c31-4068-a065-04a60ef33740] Running
	I0916 10:23:42.375435   14731 system_pods.go:89] "tiller-deploy-b48cc5f79-jhzqk" [456f019d-09af-4e09-9db8-cda9eda20ea3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:23:42.375442   14731 system_pods.go:126] duration metric: took 268.518179ms to wait for k8s-apps to be running ...
	I0916 10:23:42.375451   14731 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:23:42.375494   14731 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:23:42.387115   14731 system_svc.go:56] duration metric: took 11.655134ms WaitForService to wait for kubelet
	I0916 10:23:42.387140   14731 kubeadm.go:582] duration metric: took 11.2800718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:42.387171   14731 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:23:42.394773   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:42.507386   14731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:23:42.507413   14731 node_conditions.go:123] node cpu capacity is 8
	I0916 10:23:42.507426   14731 node_conditions.go:105] duration metric: took 120.250263ms to run NodePressure ...
	I0916 10:23:42.507440   14731 start.go:241] waiting for startup goroutines ...
	I0916 10:23:42.531600   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:42.894380   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.032814   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.393764   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:43.533097   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:43.895538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.033018   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.394939   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:44.532533   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:44.923857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.032464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.395518   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:45.532657   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:45.894621   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.033157   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.394820   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:46.533142   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:46.894150   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.032554   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.394103   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:47.532755   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:47.923101   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.032246   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.393952   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:48.531988   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:48.894443   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.032216   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.395492   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:49.532583   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:49.894398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.033134   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.394173   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:50.532730   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:50.895356   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.032410   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.394499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:51.532834   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:51.894466   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.032976   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.393504   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:52.532575   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:23:52.895473   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.032897   14731 kapi.go:107] duration metric: took 20.503936091s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:23:53.395464   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:53.897663   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.395912   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:54.895542   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.394636   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:55.895289   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.394104   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:56.894685   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.394359   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:57.894369   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.394113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:58.895010   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.394765   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:23:59.895050   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.394699   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:00.893904   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.394519   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:01.893535   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.394889   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:02.894397   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.441082   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:03.893998   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.395141   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:04.895375   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.395269   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:05.896063   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.394972   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:06.894856   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.395279   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.895293   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.394857   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.896499   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.394125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.895033   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.395202   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.894724   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.394201   14731 kapi.go:107] duration metric: took 36.504077115s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
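The kapi.go waiter seen throughout this log polls the pods matching an addon's label selector until every one reports Ready. A one-shot equivalent with kubectl, assuming the same selector and the 6-minute budget used elsewhere in this log for pod readiness:

		# block until all csi-hostpath-driver pods in kube-system are Ready
		kubectl -n kube-system wait pod \
		  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
		  --for=condition=Ready --timeout=6m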
	I0916 10:24:20.771019   14731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:20.771044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.269732   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:21.769379   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.270108   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:22.770020   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.270002   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:23.769993   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.270052   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:24.770494   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.270065   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:25.770030   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.269978   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:26.769822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.269485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:27.770749   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.270006   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:28.769786   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.269361   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:29.770193   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.270017   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:30.769639   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.269368   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:31.770132   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.270538   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:32.770922   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.270016   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:33.770707   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.269925   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:34.770343   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.270669   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:35.770484   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.269981   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:36.770067   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.269913   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:37.769999   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.269695   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:38.769660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.270376   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:39.770125   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.270113   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:40.769635   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.269392   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:41.770622   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.270727   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:42.771121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.270788   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:43.779792   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.269641   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:44.771197   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.270296   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:45.770234   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.270660   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:46.770461   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.270582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:47.770582   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.269826   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:48.769427   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.270745   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:49.769804   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.270843   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:50.770187   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.270064   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:51.769562   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.270917   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:52.769965   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.270218   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:53.770822   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.269777   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:54.770121   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.269909   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:55.770485   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.271044   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:56.770398   14731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:24:57.270401   14731 kapi.go:107] duration metric: took 1m18.003594843s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:24:57.272413   14731 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0916 10:24:57.273706   14731 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:24:57.274969   14731 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:24:57.276179   14731 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, metrics-server, helm-tiller, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, volcano, registry, csi-hostpath-driver, gcp-auth
	I0916 10:24:57.277503   14731 addons.go:510] duration metric: took 1m26.177945157s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd metrics-server helm-tiller storage-provisioner storage-provisioner-rancher inspektor-gadget volumesnapshots volcano registry csi-hostpath-driver gcp-auth]
	I0916 10:24:57.277539   14731 start.go:246] waiting for cluster config update ...
	I0916 10:24:57.277557   14731 start.go:255] writing updated cluster config ...
	I0916 10:24:57.277828   14731 exec_runner.go:51] Run: rm -f paused
	I0916 10:24:57.280918   14731 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:24:57.282289   14731 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
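
The kapi.go:96 block above is minikube polling, roughly every 500 ms, until a pod carrying the kubernetes.io/minikube-addons=gcp-auth label leaves Pending. Below is a minimal sketch of that label-wait pattern using client-go; the kubeconfig path, namespace, and 3-minute timeout are assumptions for illustration, not minikube's actual kapi.go code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPod polls until every pod matching selector in ns is Running.
    func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 3*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // not created yet (or transient error): keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForLabeledPod(context.Background(), cs, "gcp-auth",
    		"kubernetes.io/minikube-addons=gcp-auth"); err != nil {
    		panic(err)
    	}
    }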
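
The out.go lines above also describe the opt-out: the gcp-auth webhook leaves alone any pod carrying a label with the gcp-auth-skip-secret key. A hedged sketch of such a pod object built with client-go types follows; the name, image, and label value are placeholders, since per the message only the label key matters.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// A pod labeled so the gcp-auth webhook skips credential injection.
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:   "no-gcp-creds", // placeholder name
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "app", Image: "busybox"}}, // placeholder image
    		},
    	}
    	fmt.Println(pod.Labels)
    }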
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:44:01 UTC. --
	Sep 16 10:24:58 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:24:58.030336094Z" level=info msg="ignoring event" container=063696e8a73aabc89418d2c58e71706ba02ccbbecf8ff00cbae4ce69ab4d8dc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:25:38 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:25:38Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:25:40 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:25:40.013070122Z" level=info msg="ignoring event" container=285e9d3bf61063164576db1e8b56067f2715f3125c65a408fb460b33df4e0df3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:27:12 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:27:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836428Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.783836085Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.785558764Z" level=error msg="Error running exec 13e088d02d0a5f22acc5e5b1a4471ba70b2f244b367260c945e607695da23676 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799299215Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.799311411Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.801146259Z" level=error msg="Error running exec 8124ff9355b2b195f4666e956e5c04835c7ab5bbca41ab5f07f5d54c9a438e8a in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:27:13 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:27:13.997546489Z" level=info msg="ignoring event" container=f3640752ee05a9190e2874d8029d2950d2308625d94fdf6cd1e73a26f255bdf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:01 ubuntu-20-agent-2 cri-dockerd[15275]: time="2024-09-16T10:30:01Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860094779Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.860112359Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 10:30:02 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:02.861900754Z" level=error msg="Error running exec 7325b4844d467316c92c35912814ef76ffc52ab0706fc16a141d2d4c86eec807 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:03 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:03.053613980Z" level=info msg="ignoring event" container=f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.355786042Z" level=info msg="ignoring event" container=bc6d19b424172e382c8ba7fbb9063fdf8fc8ceb241702cb5abcca832ea72eeb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.422842358Z" level=info msg="ignoring event" container=6dbe08ccc6f03342db0d1c05b85fa6a4e41122b111bd5219212aadb3bac69295 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.489977617Z" level=info msg="ignoring event" container=bede25b8f44c47a7583d31e5f552ceb2818b45bf9b6e66175cefd80b6e4a1ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:10 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:10.585848075Z" level=info msg="ignoring event" container=8a0796a6fd139e34146729f05330e8554afd338b598fd53c135d700704cea580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:30:16 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:30:16.809464495Z" level=info msg="ignoring event" container=3902ec2c22c138271b7c612de2b2ec28e9b3e2406519c1a03ab3d1e1760a1146 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:36:28 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:36:28.322247254Z" level=info msg="ignoring event" container=1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:36:28 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:36:28.464407227Z" level=info msg="ignoring event" container=1d5dec60ab67acd84e750360030eddc13a9150ac9c006977978cdb19a2e6156b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:59 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:37:59.980163682Z" level=info msg="ignoring event" container=fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:38:00 ubuntu-20-agent-2 dockerd[14947]: time="2024-09-16T10:38:00.122483210Z" level=info msg="ignoring event" container=4cc0471023071a3d36728e0fb6850e3fa91bc3294992e3a0df5a4b8dce1d050a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b806437d39cb5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 19 minutes ago      Running             gcp-auth                                 0                   872b837fda1bc       gcp-auth-89d5ffd79-wt6q9
	6b6303f81cb52       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          19 minutes ago      Running             csi-snapshotter                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d549f78521f57       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          19 minutes ago      Running             csi-provisioner                          0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	9125db73d99e1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            19 minutes ago      Running             liveness-probe                           0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	87c37483d2112       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           19 minutes ago      Running             hostpath                                 0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	cd42401f74b1d       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         19 minutes ago      Running             admission                                0                   d5cc1eab65661       volcano-admission-77d7d48b68-t975d
	0c0ddb709904f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                19 minutes ago      Running             node-driver-registrar                    0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	b0782903176d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              19 minutes ago      Running             csi-resizer                              0                   fb9dfe220b3dc       csi-hostpath-resizer-0
	4edaa9f0351e1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             19 minutes ago      Running             csi-attacher                             0                   fa27205224e9f       csi-hostpath-attacher-0
	f0ce5f8efdc2b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   19 minutes ago      Running             csi-external-health-monitor-controller   0                   f19e06ccc7dbc       csi-hostpathplugin-x6gtw
	d35f343c48bcb       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               19 minutes ago      Running             volcano-scheduler                        0                   ca6d7d9980376       volcano-scheduler-576bc46687-l88qd
	3fa7892ed6588       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      20 minutes ago      Running             volcano-controllers                      0                   1d8c71b5408cc       volcano-controllers-56675bb4d5-kd2r2
	23bdeff0c7c03       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         20 minutes ago      Exited              main                                     0                   2684a290edfd1       volcano-admission-init-4rd4m
	a7c6ba8b5b8e1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      20 minutes ago      Running             volume-snapshot-controller               0                   2a9eff5290337       snapshot-controller-56fcc65765-c729p
	59e2e493c17f7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      20 minutes ago      Running             volume-snapshot-controller               0                   a62d801d6adc1       snapshot-controller-56fcc65765-hhv7d
	c5ee33602669d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       20 minutes ago      Running             local-path-provisioner                   0                   6fcb08908435e       local-path-provisioner-86d989889c-xpx7m
	c2bb3772d49b5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        20 minutes ago      Running             yakd                                     0                   54361ea6661c2       yakd-dashboard-67d98fc6b-ggfmd
	566744d15c91f       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               20 minutes ago      Running             cloud-spanner-emulator                   0                   2ce78388a8512       cloud-spanner-emulator-769b77f747-7x6cj
	1cb6e9270416d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     20 minutes ago      Running             nvidia-device-plugin-ctr                 0                   6c5f84705a086       nvidia-device-plugin-daemonset-dcrh9
	e19218997c830       6e38f40d628db                                                                                                                                20 minutes ago      Running             storage-provisioner                      0                   debc24e02ca98       storage-provisioner
	e0a1b4e718aed       c69fa2e9cbf5f                                                                                                                                20 minutes ago      Running             coredns                                  0                   44104ce9decd6       coredns-7c65d6cfc9-vlmkz
	95dfe8f64bc6f       60c005f310ff3                                                                                                                                20 minutes ago      Running             kube-proxy                               0                   3eddba63436f7       kube-proxy-gm7kv
	236092569fa7f       2e96e5913fc06                                                                                                                                20 minutes ago      Running             etcd                                     0                   f4c192de28c8e       etcd-ubuntu-20-agent-2
	f656d4b3e221b       6bab7719df100                                                                                                                                20 minutes ago      Running             kube-apiserver                           0                   13c6d1481d7e3       kube-apiserver-ubuntu-20-agent-2
	abadc50dd44f1       175ffd71cce3d                                                                                                                                20 minutes ago      Running             kube-controller-manager                  0                   2dd1e926360a9       kube-controller-manager-ubuntu-20-agent-2
	0412032e5006c       9aa1fad941575                                                                                                                                20 minutes ago      Running             kube-scheduler                           0                   b7f61176a82d0       kube-scheduler-ubuntu-20-agent-2
	
	
	==> coredns [e0a1b4e718ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59960 - 9097 "HINFO IN 5932384522844147917.1993008146596938559. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018267326s
	[INFO] 10.244.0.24:39221 - 38983 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000387765s
	[INFO] 10.244.0.24:57453 - 43799 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000481367s
	[INFO] 10.244.0.24:56558 - 1121 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126982s
	[INFO] 10.244.0.24:37367 - 64790 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137381s
	[INFO] 10.244.0.24:53874 - 61210 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129517s
	[INFO] 10.244.0.24:35488 - 47376 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167054s
	[INFO] 10.244.0.24:39756 - 34231 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003382584s
	[INFO] 10.244.0.24:42692 - 8269 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003496461s
	[INFO] 10.244.0.24:40495 - 49254 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00344128s
	[INFO] 10.244.0.24:54381 - 40672 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003513746s
	[INFO] 10.244.0.24:45458 - 51280 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002837809s
	[INFO] 10.244.0.24:39080 - 48381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003158709s
	[INFO] 10.244.0.24:49164 - 30651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00123377s
	[INFO] 10.244.0.24:33687 - 1000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001779254s
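
The NXDOMAIN run above is the ordinary resolv.conf search-path walk, not a failure: with the typical pod setting of ndots:5, "storage.googleapis.com" (two dots) is first tried against each search domain — the namespace and cluster suffixes, then the GCE host's internal suffixes — and only the final bare-name queries return NOERROR. A small sketch of that expansion logic; the search list below is inferred from the suffixes visible in the log.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // expand mimics the resolver's search-path behavior: names with fewer
    // than ndots dots are tried against each search domain before the bare name.
    func expand(name string, search []string, ndots int) []string {
    	var queries []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			queries = append(queries, name+"."+s)
    		}
    	}
    	return append(queries, name) // the absolute name is tried last
    }

    func main() {
    	search := []string{
    		"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local",
    		"us-west1-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal",
    	}
    	for _, q := range expand("storage.googleapis.com", search, 5) {
    		fmt.Println(q)
    	}
    }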
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:43:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:40:45 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:40:45 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:40:45 +0000   Mon, 16 Sep 2024 10:23:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:40:45 +0000   Mon, 16 Sep 2024 10:23:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-7x6cj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  gcp-auth                    gcp-auth-89d5ffd79-wt6q9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-vlmkz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 csi-hostpathplugin-x6gtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-gm7kv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 nvidia-device-plugin-daemonset-dcrh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 snapshot-controller-56fcc65765-c729p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 snapshot-controller-56fcc65765-hhv7d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  local-path-storage          local-path-provisioner-86d989889c-xpx7m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  volcano-system              volcano-admission-77d7d48b68-t975d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  volcano-system              volcano-controllers-56675bb4d5-kd2r2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  volcano-system              volcano-scheduler-576bc46687-l88qd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ggfmd               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             298Mi (0%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x6 over 20m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 4f 68 84 7c 26 08 06
	[  +0.029810] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 4a d1 e3 09 35 08 06
	[  +2.541456] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 35 1c 77 2c 6a 08 06
	[Sep16 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 2e 0e e0 53 6a 08 06
	[  +1.979621] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
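
The "martian source" entries are the kernel's reverse-path filter flagging packets that arrive on eth0 with pod-network (10.244.0.0/24) addresses it does not expect on that interface — plausible with the none driver, where pod traffic shares the host's interfaces — and they are printed only while log_martians is enabled. A tiny sketch that reads the relevant sysctl from /proc (standard path assumed):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Equivalent to: sysctl net.ipv4.conf.all.log_martians
    	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
    	if err != nil {
    		fmt.Println("could not read sysctl:", err)
    		return
    	}
    	fmt.Println("net.ipv4.conf.all.log_martians =", strings.TrimSpace(string(b)))
    }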
	
	
	==> etcd [236092569fa7] <==
	{"level":"info","ts":"2024-09-16T10:23:22.170188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.170298Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:22.171038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:22.171804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:23:22.172233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:23:34.396500Z","caller":"traceutil/trace.go:171","msg":"trace[1443924902] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"122.443714ms","start":"2024-09-16T10:23:34.274027Z","end":"2024-09-16T10:23:34.396470Z","steps":["trace[1443924902] 'process raft request'  (duration: 42.860188ms)","trace[1443924902] 'compare'  (duration: 79.401186ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:23:34.396568Z","caller":"traceutil/trace.go:171","msg":"trace[1914523289] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"119.254337ms","start":"2024-09-16T10:23:34.277291Z","end":"2024-09-16T10:23:34.396545Z","steps":["trace[1914523289] 'process raft request'  (duration: 119.164267ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396664Z","caller":"traceutil/trace.go:171","msg":"trace[551861205] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"121.694141ms","start":"2024-09-16T10:23:34.274951Z","end":"2024-09-16T10:23:34.396645Z","steps":["trace[551861205] 'process raft request'  (duration: 121.454274ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396765Z","caller":"traceutil/trace.go:171","msg":"trace[612276300] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"117.724007ms","start":"2024-09-16T10:23:34.279030Z","end":"2024-09-16T10:23:34.396754Z","steps":["trace[612276300] 'process raft request'  (duration: 117.466969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396775Z","caller":"traceutil/trace.go:171","msg":"trace[485760124] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"107.084096ms","start":"2024-09-16T10:23:34.289681Z","end":"2024-09-16T10:23:34.396765Z","steps":["trace[485760124] 'process raft request'  (duration: 106.857041ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:34.396851Z","caller":"traceutil/trace.go:171","msg":"trace[655456638] linearizableReadLoop","detail":"{readStateIndex:770; appliedIndex:767; }","duration":"117.963693ms","start":"2024-09-16T10:23:34.278878Z","end":"2024-09-16T10:23:34.396842Z","steps":["trace[655456638] 'read index received'  (duration: 5.820633ms)","trace[655456638] 'applied index is now lower than readState.Index'  (duration: 112.141241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:23:34.396925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.026308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:23:34.396979Z","caller":"traceutil/trace.go:171","msg":"trace[1000991150] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate; range_end:; response_count:0; response_revision:752; }","duration":"118.092731ms","start":"2024-09-16T10:23:34.278875Z","end":"2024-09-16T10:23:34.396968Z","steps":["trace[1000991150] 'agreement among raft nodes before linearized reading'  (duration: 118.006643ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:23:38.471576Z","caller":"traceutil/trace.go:171","msg":"trace[1536302833] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"154.211147ms","start":"2024-09-16T10:23:38.317339Z","end":"2024-09-16T10:23:38.471550Z","steps":["trace[1536302833] 'process raft request'  (duration: 154.053853ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:33:22.188338Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1554}
	{"level":"info","ts":"2024-09-16T10:33:22.212714Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1554,"took":"23.934179ms","hash":4226216058,"current-db-size-bytes":7352320,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":3911680,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-16T10:33:22.212758Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4226216058,"revision":1554,"compact-revision":-1}
	{"level":"info","ts":"2024-09-16T10:38:22.193136Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1970}
	{"level":"info","ts":"2024-09-16T10:38:22.209905Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1970,"took":"16.323073ms","hash":247125003,"current-db-size-bytes":7352320,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":2813952,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-16T10:38:22.209958Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":247125003,"revision":1970,"compact-revision":1554}
	{"level":"info","ts":"2024-09-16T10:43:22.197324Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2368}
	{"level":"info","ts":"2024-09-16T10:43:22.214096Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2368,"took":"16.301451ms","hash":1353663712,"current-db-size-bytes":7352320,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":2527232,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-09-16T10:43:22.214135Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1353663712,"revision":2368,"compact-revision":1970}
	
	
	==> gcp-auth [b806437d39cb] <==
	2024/09/16 10:24:56 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:44:01 up 26 min,  0 users,  load average: 0.12, 0.14, 0.16
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [f656d4b3e221] <==
	W0916 10:24:04.623446       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:05.663512       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:06.687369       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:07.741783       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:08.796077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:09.892806       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.278243       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.278280       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.279887       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.290102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:10.290145       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:10.291730       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:10.911493       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:11.942936       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:13.040622       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:14.059340       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.67.140:443: connect: connection refused
	W0916 10:24:20.272187       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:20.272230       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.287211       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.287254       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	W0916 10:24:42.296283       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.162.126:443: connect: connection refused
	E0916 10:24:42.296314       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.162.126:443: connect: connection refused" logger="UnhandledError"
	I0916 10:30:16.763857       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:30:17.782395       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:36:44.202861       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
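
The contrast above — mutatequeue.volcano.sh "failing closed" while gcp-auth-mutate.k8s.io "fails open" on the same connection-refused error — comes from each webhook's failurePolicy: Fail rejects the request when the webhook is unreachable, Ignore admits it. A hedged illustration with the admissionregistration/v1 types; these objects are illustrative, not the manifests the addons actually install.

    package main

    import (
    	"fmt"

    	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    )

    func main() {
    	ignore := admissionregistrationv1.Ignore // fail open: admit if the webhook is unreachable
    	fail := admissionregistrationv1.Fail     // fail closed: reject instead

    	webhooks := []admissionregistrationv1.MutatingWebhook{
    		{Name: "gcp-auth-mutate.k8s.io", FailurePolicy: &ignore},
    		{Name: "mutatequeue.volcano.sh", FailurePolicy: &fail},
    	}
    	for _, w := range webhooks {
    		fmt.Printf("%-26s failurePolicy=%s\n", w.Name, *w.FailurePolicy)
    	}
    }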
	
	
	==> kube-controller-manager [abadc50dd44f] <==
	W0916 10:36:03.761315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:03.761365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:36:27.239533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.183µs"
	W0916 10:36:38.611788       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:38.611834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:10.128172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:10.128213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:42.411738       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:42.411793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:59.945055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="32.509µs"
	W0916 10:38:14.038211       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:38:14.038251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:39:11.867226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:39:11.867269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:40:03.630104       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:40:03.630148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:40:45.470058       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	W0916 10:41:00.693938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:41:00.693979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:41:47.275869       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:41:47.275910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:42:46.220514       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:42:46.220558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:43:43.082394       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:43:43.082438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [95dfe8f64bc6] <==
	I0916 10:23:31.205838       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:31.406402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:23:31.406455       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:23:31.489030       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:23:31.489102       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:23:31.508985       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:23:31.509483       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:23:31.509513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:23:31.539926       1 config.go:199] "Starting service config controller"
	I0916 10:23:31.540054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:23:31.559259       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:23:31.559278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:23:31.559824       1 config.go:328] "Starting node config controller"
	I0916 10:23:31.559836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:23:31.641834       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:23:31.660551       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:23:31.660598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0412032e5006] <==
	W0916 10:23:23.040568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:23.040650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.040660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:23.040674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:23:23.040716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.040636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.040756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.848417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:23.848457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.947205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:23.947244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:23.963782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:23.963827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.018222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:23:24.018276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.056374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:24.056418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.187965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:24.188004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:24.200436       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:23:24.200484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:23:24.239846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:23:24.239894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:27.139487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
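The burst of "forbidden" warnings above is a common, usually benign startup race: the scheduler's informers begin listing cluster resources before the apiserver has finished propagating the bootstrap RBAC rules, and the final "Caches are synced" line marks the point where the retries succeed. A hedged way to confirm the scheduler's permissions after startup (an illustrative check against the same cluster, not something this test run executes):

	# Both should print "yes" once RBAC bootstrapping has completed
	kubectl auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler
	kubectl auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler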
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:44:01 UTC. --
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.059883   16162 scope.go:117] "RemoveContainer" containerID="f63dc6bb021d4ce6cbee3075c29258d7331bf514af6829856a10baf0281d447f"
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152877   16162 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-modules\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152906   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bdbd4\" (UniqueName: \"kubernetes.io/projected/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-kube-api-access-bdbd4\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152918   16162 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-cgroup\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.152930   16162 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c0a97873-e0c3-41a1-af0b-2ece8d95b20a-bpffs\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:30:17 ubuntu-20-agent-2 kubelet[16162]: I0916 10:30:17.391044   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a97873-e0c3-41a1-af0b-2ece8d95b20a" path="/var/lib/kubelet/pods/c0a97873-e0c3-41a1-af0b-2ece8d95b20a/volumes"
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.624852   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd555\" (UniqueName: \"kubernetes.io/projected/1d335baf-98ff-41fd-9b89-ddd333da0dc4-kube-api-access-jd555\") pod \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\" (UID: \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\") "
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.624912   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d335baf-98ff-41fd-9b89-ddd333da0dc4-tmp-dir\") pod \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\" (UID: \"1d335baf-98ff-41fd-9b89-ddd333da0dc4\") "
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.625177   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d335baf-98ff-41fd-9b89-ddd333da0dc4-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "1d335baf-98ff-41fd-9b89-ddd333da0dc4" (UID: "1d335baf-98ff-41fd-9b89-ddd333da0dc4"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.626978   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d335baf-98ff-41fd-9b89-ddd333da0dc4-kube-api-access-jd555" (OuterVolumeSpecName: "kube-api-access-jd555") pod "1d335baf-98ff-41fd-9b89-ddd333da0dc4" (UID: "1d335baf-98ff-41fd-9b89-ddd333da0dc4"). InnerVolumeSpecName "kube-api-access-jd555". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.725323   16162 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1d335baf-98ff-41fd-9b89-ddd333da0dc4-tmp-dir\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:36:28 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:28.725365   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jd555\" (UniqueName: \"kubernetes.io/projected/1d335baf-98ff-41fd-9b89-ddd333da0dc4-kube-api-access-jd555\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.333823   16162 scope.go:117] "RemoveContainer" containerID="1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.350814   16162 scope.go:117] "RemoveContainer" containerID="1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: E0916 10:36:29.351844   16162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda" containerID="1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.351896   16162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"} err="failed to get container status \"1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1c9f6a3099faf7cbc38f3256c953fd215441f091b07a121d736f152b0cf41eda"
	Sep 16 10:36:29 ubuntu-20-agent-2 kubelet[16162]: I0916 10:36:29.389305   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d335baf-98ff-41fd-9b89-ddd333da0dc4" path="/var/lib/kubelet/pods/1d335baf-98ff-41fd-9b89-ddd333da0dc4/volumes"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.341360   16162 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlk7w\" (UniqueName: \"kubernetes.io/projected/456f019d-09af-4e09-9db8-cda9eda20ea3-kube-api-access-nlk7w\") pod \"456f019d-09af-4e09-9db8-cda9eda20ea3\" (UID: \"456f019d-09af-4e09-9db8-cda9eda20ea3\") "
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.343712   16162 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456f019d-09af-4e09-9db8-cda9eda20ea3-kube-api-access-nlk7w" (OuterVolumeSpecName: "kube-api-access-nlk7w") pod "456f019d-09af-4e09-9db8-cda9eda20ea3" (UID: "456f019d-09af-4e09-9db8-cda9eda20ea3"). InnerVolumeSpecName "kube-api-access-nlk7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.373318   16162 scope.go:117] "RemoveContainer" containerID="fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.392731   16162 scope.go:117] "RemoveContainer" containerID="fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: E0916 10:38:00.393535   16162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602" containerID="fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.393576   16162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"} err="failed to get container status \"fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602\": rpc error: code = Unknown desc = Error response from daemon: No such container: fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602"
	Sep 16 10:38:00 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:00.441998   16162 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nlk7w\" (UniqueName: \"kubernetes.io/projected/456f019d-09af-4e09-9db8-cda9eda20ea3-kube-api-access-nlk7w\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 16 10:38:01 ubuntu-20-agent-2 kubelet[16162]: I0916 10:38:01.391092   16162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456f019d-09af-4e09-9db8-cda9eda20ea3" path="/var/lib/kubelet/pods/456f019d-09af-4e09-9db8-cda9eda20ea3/volumes"
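The "No such container" errors above are another benign race: "RemoveContainer" is logged twice for the same ID, and the second status lookup fails because the first call has already deleted the container from the Docker daemon. A hedged spot-check on the host (assumes the Docker CLI; the ID is copied from the log above):

	# Exits non-zero with "No such object" once the container is gone, confirming the delete took effect
	docker inspect fe6d1bd912755083a936f733c2acf73b4f7788af0654bc6a656ad63567a49602 >/dev/null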
	
	
	==> storage-provisioner [e19218997c83] <==
	I0916 10:23:33.807788       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:23:33.819755       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:23:33.821506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:23:33.836239       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:23:33.837177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
	I0916 10:23:33.840556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"272307eb-dbc1-400e-a5a3-6595c2b694d1", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407 became leader
	I0916 10:23:33.937802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_b43bad39-07cb-4897-bb1d-f1492a783407!
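As the Event above shows, the provisioner serializes startup through client-go leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object. A hedged way to inspect the current holder (illustrative only; this vintage of client-go records the leader in an Endpoints annotation):

	# The holder identity appears in the control-plane.alpha.kubernetes.io/leader annotation
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml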
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (421.123µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/CSI (361.07s)
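Every kubectl invocation in this report fails identically: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the binary at all, almost always because it targets a different architecture or is truncated, so these failures are environmental rather than regressions in the code under test. A hedged triage on the affected agent (assumes the standard file(1) and uname(1) tools):

	# The binary's target architecture should match the host's (x86_64 on this agent)
	file /usr/local/bin/kubectl
	uname -m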

TestFunctional/serial/KubeContext (1.17s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: fork/exec /usr/local/bin/kubectl: exec format error (608.916µs)
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:687: expected current-context = "minikube", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/serial/KubeContext FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubeContext]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/serial/KubeContext logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:38 UTC |
	|         | helm-tiller --alsologtostderr        |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | enable headlamp -p minikube          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | headlamp --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | -p minikube                          |          |         |         |                     |                     |
	| addons  | minikube addons disable yakd         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| stop    | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | disable gvisor -p minikube           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| start   | -p minikube --memory=2048            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|         | --cert-expiration=3m                 |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| start   | -p minikube --memory=2048            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	|         | --cert-expiration=8760h              |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start   | -p minikube --memory=4000            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|         | --apiserver-port=8441                |          |         |         |                     |                     |
	|         | --wait=all --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | -v=8                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:49:01
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:49:01.151961   40910 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:49:01.152095   40910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:01.152107   40910 out.go:358] Setting ErrFile to fd 2...
	I0916 10:49:01.152112   40910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:01.152289   40910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:49:01.152830   40910 out.go:352] Setting JSON to false
	I0916 10:49:01.154034   40910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1892,"bootTime":1726481849,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:49:01.154131   40910 start.go:139] virtualization: kvm guest
	I0916 10:49:01.156584   40910 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:49:01.158407   40910 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:49:01.158430   40910 notify.go:220] Checking for updates...
	W0916 10:49:01.158432   40910 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:49:01.160643   40910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:49:01.161920   40910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:01.163203   40910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:49:01.164512   40910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:49:01.165743   40910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:49:01.167548   40910 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:49:01.167660   40910 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:49:01.168150   40910 exec_runner.go:51] Run: systemctl --version
	I0916 10:49:01.181781   40910 out.go:177] * Using the none driver based on existing profile
	I0916 10:49:01.183300   40910 start.go:297] selected driver: none
	I0916 10:49:01.183319   40910 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:01.183453   40910 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:49:01.183502   40910 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 10:49:01.185287   40910 cni.go:84] Creating CNI manager for ""
	I0916 10:49:01.185375   40910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:01.185448   40910 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:01.187093   40910 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:49:01.188500   40910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:49:01.188773   40910 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:49:01.188890   40910 start.go:364] duration metric: took 76.273µs to acquireMachinesLock for "minikube"
	I0916 10:49:01.188913   40910 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:49:01.188925   40910 fix.go:54] fixHost starting: 
	I0916 10:49:01.189892   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:01.189915   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:01.189961   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:01.209135   40910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/39915/cgroup
	I0916 10:49:01.220119   40910 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/a84496f2946e5428a577f4d4bdcfe2c49204cca7acad6168eb47dea051942fe4"
	I0916 10:49:01.220183   40910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/a84496f2946e5428a577f4d4bdcfe2c49204cca7acad6168eb47dea051942fe4/freezer.state
	I0916 10:49:01.228949   40910 api_server.go:204] freezer state: "THAWED"
	I0916 10:49:01.228996   40910 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:01.232514   40910 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
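The freezer check above confirms the apiserver container is running un-paused ("THAWED") before minikube probes /healthz directly. The probe can be reproduced by hand (a hedged sketch; /healthz is normally readable by unauthenticated clients via the default system:public-info-viewer binding, so no client certificate should be needed):

	# Prints "ok" while the apiserver at 10.138.0.48:8441 is healthy
	curl -sk https://10.138.0.48:8441/healthz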
	I0916 10:49:01.232545   40910 fix.go:112] recreateIfNeeded on minikube: state=Running err=<nil>
	W0916 10:49:01.232554   40910 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:49:01.234457   40910 out.go:177] * Updating the running none "minikube" bare metal machine ...
	I0916 10:49:01.235710   40910 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:49:01.235759   40910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:01.235801   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:01.248549   40910 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:49:01.248572   40910 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:49:01.248580   40910 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:49:01.250269   40910 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:49:01.251512   40910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:49:01.251582   40910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:49:01.251665   40910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> 110572.pem in /etc/ssl/certs
	I0916 10:49:01.251676   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> /etc/ssl/certs/110572.pem
	I0916 10:49:01.251744   40910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts -> hosts in /etc/test/nested/copy/11057
	I0916 10:49:01.251751   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts -> /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.251794   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11057
	I0916 10:49:01.259798   40910 exec_runner.go:144] found /etc/ssl/certs/110572.pem, removing ...
	I0916 10:49:01.259817   40910 exec_runner.go:203] rm: /etc/ssl/certs/110572.pem
	I0916 10:49:01.259849   40910 exec_runner.go:51] Run: sudo rm -f /etc/ssl/certs/110572.pem
	I0916 10:49:01.269468   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:01.269641   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube732688499 /etc/ssl/certs/110572.pem
	I0916 10:49:01.277772   40910 exec_runner.go:144] found /etc/test/nested/copy/11057/hosts, removing ...
	I0916 10:49:01.277791   40910 exec_runner.go:203] rm: /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.277819   40910 exec_runner.go:51] Run: sudo rm -f /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.285197   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts --> /etc/test/nested/copy/11057/hosts (40 bytes)
	I0916 10:49:01.285316   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube465686449 /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.293938   40910 start.go:296] duration metric: took 58.209304ms for postStartSetup
	I0916 10:49:01.293964   40910 fix.go:56] duration metric: took 105.040267ms for fixHost
	I0916 10:49:01.293973   40910 start.go:83] releasing machines lock for "minikube", held for 105.068271ms
	I0916 10:49:01.294137   40910 interface.go:432] Looking for default routes with IPv4 addresses
	I0916 10:49:01.294148   40910 interface.go:437] Default route transits interface "ens4"
	I0916 10:49:01.294329   40910 interface.go:209] Interface ens4 is up
	I0916 10:49:01.294389   40910 interface.go:257] Interface "ens4" has 2 addresses :[10.138.0.48/32 fe80::4001:aff:fe8a:30/64].
	I0916 10:49:01.294426   40910 interface.go:224] Checking addr  10.138.0.48/32.
	I0916 10:49:01.294439   40910 interface.go:231] IP found 10.138.0.48
	I0916 10:49:01.294453   40910 interface.go:263] Found valid IPv4 address 10.138.0.48 for interface "ens4".
	I0916 10:49:01.294464   40910 interface.go:443] Found active IP 10.138.0.48 
	I0916 10:49:01.294551   40910 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:49:01.294609   40910 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:49:01.296373   40910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:01.296419   40910 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:01.304778   40910 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:01.304804   40910 start.go:495] detecting cgroup driver to use...
	I0916 10:49:01.304834   40910 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:01.304933   40910 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:01.321651   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:49:01.330364   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:49:01.338939   40910 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:49:01.339015   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:49:01.347758   40910 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:01.356238   40910 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:49:01.365789   40910 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:01.375456   40910 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:01.383147   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:49:01.392828   40910 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:49:01.401464   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:49:01.409759   40910 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:01.416630   40910 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:01.423420   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:01.671116   40910 exec_runner.go:51] Run: sudo systemctl restart containerd
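The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) so it matches the "cgroupfs" driver detected on the host, after which containerd is restarted to pick up the change. A hedged spot-check of the rewritten config:

	# Should report SystemdCgroup = false after the rewrite above
	grep -n 'SystemdCgroup' /etc/containerd/config.toml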
	I0916 10:49:01.835580   40910 start.go:495] detecting cgroup driver to use...
	I0916 10:49:01.835628   40910 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:01.835789   40910 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:01.855770   40910 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:49:01.856677   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:49:01.865411   40910 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:49:01.865433   40910 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:01.865469   40910 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:01.873087   40910 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:49:01.873214   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2891534564 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:01.880726   40910 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:49:02.118649   40910 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:49:02.355022   40910 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:49:02.355171   40910 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:49:02.355186   40910 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:49:02.355227   40910 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:49:02.364314   40910 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:49:02.364450   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2214450792 /etc/docker/daemon.json
	I0916 10:49:02.372419   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:02.612083   40910 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:49:13.102098   40910 exec_runner.go:84] Completed: sudo systemctl restart docker: (10.489964604s)
	I0916 10:49:13.102166   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:49:13.117386   40910 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:49:13.150852   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:13.163093   40910 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:49:13.380641   40910 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:49:13.597912   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:13.823840   40910 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:49:13.841381   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:13.854143   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:14.070802   40910 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:49:14.139874   40910 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:49:14.139951   40910 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:49:14.141296   40910 start.go:563] Will wait 60s for crictl version
	I0916 10:49:14.141344   40910 exec_runner.go:51] Run: which crictl
	I0916 10:49:14.142223   40910 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:49:14.171538   40910 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:49:14.171592   40910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:14.194015   40910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:14.216117   40910 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:49:14.216210   40910 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:14.218934   40910 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:49:14.220124   40910 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:14.220241   40910 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:49:14.220272   40910 kubeadm.go:934] updating node { 10.138.0.48 8441 v1.31.1 docker true true} ...
	I0916 10:49:14.220365   40910 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
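The fragment above is the systemd drop-in minikube renders for the kubelet: the empty ExecStart= clears any inherited command line before the version-pinned binary is launched with the node IP and the resolv-conf override carried in the profile's ExtraOptions. A hedged way to see what systemd actually merges (standard systemd tooling; the 10-kubeadm.conf drop-in is written a few lines below):

	# Shows the base unit plus all drop-ins in effect
	systemctl cat kubelet.service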
	I0916 10:49:14.220420   40910 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:49:14.268128   40910 cni.go:84] Creating CNI manager for ""
	I0916 10:49:14.268156   40910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:14.268166   40910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:14.268187   40910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:14.268353   40910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
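This generated three-document config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below before kubeadm consumes it. On kubeadm v1.26 and newer it can be sanity-checked offline (a hedged sketch, not a step the test harness runs):

	# Static validation of API versions and fields; does not touch the running cluster
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new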
	I0916 10:49:14.268417   40910 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:14.277293   40910 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:14.277344   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:49:14.285281   40910 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:49:14.285302   40910 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:14.285345   40910 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:14.292474   40910 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:49:14.292596   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3131872588 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:14.299866   40910 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:49:14.299897   40910 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:49:14.299936   40910 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:49:14.307476   40910 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:14.307590   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3779848822 /lib/systemd/system/kubelet.service
	I0916 10:49:14.315642   40910 exec_runner.go:144] found /var/tmp/minikube/kubeadm.yaml.new, removing ...
	I0916 10:49:14.315659   40910 exec_runner.go:203] rm: /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.315686   40910 exec_runner.go:51] Run: sudo rm -f /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.322574   40910 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:49:14.322721   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1661355805 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.331244   40910 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:14.332560   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:14.544862   40910 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:49:14.556727   40910 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:49:14.556749   40910 certs.go:194] generating shared ca certs ...
	I0916 10:49:14.556768   40910 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:14.556918   40910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:49:14.556972   40910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:49:14.556986   40910 certs.go:256] generating profile certs ...
	I0916 10:49:14.557130   40910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:49:14.557208   40910 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:49:14.557258   40910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:49:14.557271   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.557288   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:49:14.557305   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.557325   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.557341   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.557361   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.557378   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.557396   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.557464   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem (1338 bytes)
	W0916 10:49:14.557505   40910 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:14.557518   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:49:14.557553   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:49:14.557586   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:14.557620   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:14.557675   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:14.557723   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.557744   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.557762   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem -> /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.558297   40910 exec_runner.go:144] found /var/lib/minikube/certs/ca.crt, removing ...
	I0916 10:49:14.558311   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.558352   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.566572   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:14.566718   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1119541196 /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.575377   40910 exec_runner.go:144] found /var/lib/minikube/certs/ca.key, removing ...
	I0916 10:49:14.575398   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.key
	I0916 10:49:14.575438   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.key
	I0916 10:49:14.582900   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:49:14.583058   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3054346620 /var/lib/minikube/certs/ca.key
	I0916 10:49:14.591208   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.crt, removing ...
	I0916 10:49:14.591229   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.591261   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.598888   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:14.599014   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube717466652 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.607239   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.key, removing ...
	I0916 10:49:14.607262   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.607305   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.614584   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:14.614726   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2863523917 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.622448   40910 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.crt, removing ...
	I0916 10:49:14.622465   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.622499   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.629432   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:49:14.629559   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2028355538 /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.637572   40910 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.key, removing ...
	I0916 10:49:14.637591   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.637619   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.644355   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:14.644484   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2352688056 /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.652620   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.crt, removing ...
	I0916 10:49:14.652636   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.652676   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.659675   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:14.659789   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3054953620 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.667727   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.key, removing ...
	I0916 10:49:14.667743   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.667769   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.675532   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:49:14.675648   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube447743794 /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.683024   40910 exec_runner.go:144] found /usr/share/ca-certificates/110572.pem, removing ...
	I0916 10:49:14.683043   40910 exec_runner.go:203] rm: /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.683069   40910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.690871   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /usr/share/ca-certificates/110572.pem (1708 bytes)
	I0916 10:49:14.691061   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube963407501 /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.698372   40910 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:49:14.698390   40910 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.698421   40910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.705324   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:14.705446   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1523262685 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.712576   40910 exec_runner.go:144] found /usr/share/ca-certificates/11057.pem, removing ...
	I0916 10:49:14.712591   40910 exec_runner.go:203] rm: /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.712619   40910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.720442   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem --> /usr/share/ca-certificates/11057.pem (1338 bytes)
	I0916 10:49:14.720558   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube233605773 /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.727822   40910 exec_runner.go:144] found /var/lib/minikube/kubeconfig, removing ...
	I0916 10:49:14.727837   40910 exec_runner.go:203] rm: /var/lib/minikube/kubeconfig
	I0916 10:49:14.727863   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/kubeconfig
	I0916 10:49:14.735069   40910 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:14.735193   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3783177391 /var/lib/minikube/kubeconfig
	I0916 10:49:14.742149   40910 exec_runner.go:51] Run: openssl version
	I0916 10:49:14.744789   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:14.753163   40910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.754466   40910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.754501   40910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.757166   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:14.765673   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11057.pem && ln -fs /usr/share/ca-certificates/11057.pem /etc/ssl/certs/11057.pem"
	I0916 10:49:14.783913   40910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.785237   40910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1338 Sep 16 10:49 /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.785283   40910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.788160   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11057.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:14.796603   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110572.pem && ln -fs /usr/share/ca-certificates/110572.pem /etc/ssl/certs/110572.pem"
	I0916 10:49:14.804481   40910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.805685   40910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1708 Sep 16 10:49 /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.805770   40910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.808472   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110572.pem /etc/ssl/certs/3ec20f2e.0"
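
Note: the openssl x509 -hash calls above compute each CA's subject hash, and the <hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) install the certs into OpenSSL's hashed lookup directory under /etc/ssl/certs. A hedged sketch of that step for the minikubeCA.pem case:

// Hedged sketch: reproduce the hash-symlink step from the log. The <hash>.0
// name is how OpenSSL finds CA certs in /etc/ssl/certs by subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for this CA
	link := "/etc/ssl/certs/" + hash + ".0"
	// Equivalent to: test -L <link> || ln -fs /etc/ssl/certs/minikubeCA.pem <link>
	if _, err := os.Lstat(link); err != nil {
		_ = exec.Command("sudo", "ln", "-fs", "/etc/ssl/certs/minikubeCA.pem", link).Run()
	}
	fmt.Println("installed", link)
}
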
	I0916 10:49:14.815668   40910 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:14.816908   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:14.819662   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:14.822313   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:14.824912   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:14.827464   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:14.830057   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
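
Note: each -checkend 86400 invocation above exits non-zero if the certificate expires within the next 24 hours (86400 seconds), which is how the existing control-plane certs are judged still usable before the cluster restart. A small sketch of the same check (expiresSoon is a hypothetical helper):

// Hedged sketch: check whether a cert expires within the next 24h, the same
// test the log runs via `openssl x509 -checkend 86400` (non-zero = expiring).
package main

import (
	"fmt"
	"os/exec"
)

func expiresSoon(path string) bool {
	// -checkend N exits 0 only if the cert is still valid N seconds from now.
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(c, "expiring within 24h:", expiresSoon(c))
	}
}
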
	I0916 10:49:14.832590   40910 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:14.832711   40910 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:49:14.848598   40910 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:49:14.856680   40910 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:49:14.856696   40910 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:49:14.856734   40910 exec_runner.go:51] Run: sudo test -d /data/minikube
	I0916 10:49:14.863756   40910 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:14.864097   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:14.864491   40910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:14.864741   40910 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAg
ent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:49:14.865225   40910 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:49:14.865426   40910 exec_runner.go:51] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.872764   40910 kubeadm.go:630] The running cluster does not require reconfiguration: 10.138.0.48
	I0916 10:49:14.872792   40910 kubeadm.go:597] duration metric: took 16.091162ms to restartPrimaryControlPlane
	I0916 10:49:14.872800   40910 kubeadm.go:394] duration metric: took 40.215274ms to StartCluster
	I0916 10:49:14.872816   40910 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:14.872873   40910 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:14.873412   40910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:14.873745   40910 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:49:14.873830   40910 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:49:14.873846   40910 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:49:14.873849   40910 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:49:14.873876   40910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:49:14.873913   40910 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 10:49:14.873854   40910 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:49:14.873991   40910 host.go:66] Checking if "minikube" exists ...
	I0916 10:49:14.874348   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:14.874364   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:14.874392   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:14.874445   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:14.874458   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:14.874478   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:14.876658   40910 out.go:177] * Configuring local host environment ...
	W0916 10:49:14.878282   40910 out.go:270] * 
	W0916 10:49:14.878299   40910 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:49:14.878305   40910 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:49:14.878310   40910 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:49:14.878319   40910 out.go:270] * 
	W0916 10:49:14.878357   40910 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:49:14.878367   40910 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:49:14.878373   40910 out.go:270] * 
	W0916 10:49:14.878400   40910 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:49:14.878413   40910 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:49:14.878418   40910 out.go:270] * 
	W0916 10:49:14.878422   40910 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:49:14.878447   40910 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:49:14.879746   40910 out.go:177] * Verifying Kubernetes components...
	I0916 10:49:14.881383   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	W0916 10:49:14.891622   40910 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:14.891682   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	W0916 10:49:14.892731   40910 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:14.892785   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
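
Note: the "unable to get apiserver pid" warnings above come from sudo pgrep -xnf exiting 1 when no kube-apiserver process matches yet; that exit status is treated as "not running" rather than a hard error, and the check is repeated once kubelet is back up. A hedged sketch of that probe (apiserverPID is a hypothetical helper):

// Hedged sketch: the liveness probe behind the "unable to get apiserver pid"
// warnings. pgrep exits 1 when nothing matches, which here just means the
// apiserver static pod has not (re)started yet.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func apiserverPID(profile string) (string, error) {
	pattern := fmt.Sprintf("kube-apiserver.*%s.*", profile)
	out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return "", errors.New("apiserver not running yet") // retryable state
	}
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	pid, err := apiserverPID("minikube")
	fmt.Println(pid, err)
}
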
	I0916 10:49:15.116602   40910 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:49:15.122178   40910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:15.122498   40910 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAg
ent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:49:15.122771   40910 addons.go:234] Setting addon default-storageclass=true in "minikube"
	W0916 10:49:15.122789   40910 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:49:15.122816   40910 host.go:66] Checking if "minikube" exists ...
	I0916 10:49:15.123334   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:15.123351   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:15.123382   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:15.124266   40910 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:49:15.125971   40910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.125996   40910 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:49:15.126002   40910 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.126029   40910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.128867   40910 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:49:15.128987   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:15.128997   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:15.129009   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:15.129015   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:15.129222   40910 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:49:15.129236   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:15.133877   40910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:49:15.134031   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2331610739 /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 10:49:15.139092   40910 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:15.139135   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:49:15.142164   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.148974   40910 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.148997   40910 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:49:15.149003   40910 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.149044   40910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.156633   40910 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:49:15.156903   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4183224578 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.167362   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0916 10:49:15.224893   40910 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:49:15.224933   40910 retry.go:31] will retry after 338.203366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:49:15.257912   40910 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:49:15.257952   40910 retry.go:31] will retry after 323.835935ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
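
Note: the two "apply failed, will retry" blocks above show the recovery path — kubectl apply hits connection refused while the apiserver restarts, so the apply is rerun after a short delay (and, below, with --force). A generic sketch of such a jittered retry loop, not minikube's own retry package (retryApply is a hypothetical helper):

// Hedged sketch of the retry loop implied by the "will retry after ...ms"
// lines: rerun a flaky command a few times with jittered, growing delays.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retryApply(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		// Jittered delay, loosely matching the ~300ms waits in the log.
		d := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(i+1)
		fmt.Printf("apply failed, will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
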
	I0916 10:49:15.563337   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.585866   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.631299   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:15.631323   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:15.631331   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:15.631335   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:15.631599   40910 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:49:15.631623   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:16.129460   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:16.129488   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:16.129500   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:16.129505   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.840845   40910 round_trippers.go:574] Response Status: 200 OK in 1711 milliseconds
	I0916 10:49:17.840871   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.840882   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.840887   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.840894   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.840898   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.840902   40910 round_trippers.go:580]     Audit-Id: 0c979e5c-932a-459f-ab9c-9cd0ae9b5400
	I0916 10:49:17.840906   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.841053   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:17.842042   40910 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:49:17.842064   40910 node_ready.go:38] duration metric: took 2.713160156s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:49:17.842077   40910 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
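
Note: from here the log polls the Node object until its Ready condition is True, then extends the wait to the system-critical pods listed above. A hedged client-go sketch of the node half of that poll (assumes k8s.io/client-go is on the module path; nodeReady is a hypothetical helper):

// Hedged sketch of the readiness check behind node_ready.go: fetch the Node
// and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(cs, "ubuntu-20-agent-2")
	fmt.Println("ready:", ready, "err:", err)
}
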
	I0916 10:49:17.842153   40910 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:49:17.842166   40910 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:49:17.842232   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:17.842239   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.842249   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.842255   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.849622   40910 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:49:17.849648   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.849658   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.849665   40910 round_trippers.go:580]     Audit-Id: 18da7c90-93f3-4739-be80-a1dbd645cd92
	I0916 10:49:17.849669   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.849672   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.849677   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.849680   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.850491   40910 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"393"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"365","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51819 chars]
	I0916 10:49:17.854842   40910 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:17.854923   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:17.854935   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.854945   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.854950   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.856679   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:17.856694   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.856701   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.856704   40910 round_trippers.go:580]     Audit-Id: aedb8f86-8d36-4b53-9f18-beaaa7217748
	I0916 10:49:17.856709   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.856713   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.856717   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.856721   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.856848   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"365","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6725 chars]
	I0916 10:49:17.857363   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:17.857379   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.857387   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.857390   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.862681   40910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:49:17.862697   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.862706   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.862714   40910 round_trippers.go:580]     Audit-Id: 5693cdba-39e6-4bc8-adc4-8bf7c8200ae9
	I0916 10:49:17.862719   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.862723   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.862727   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.862732   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.863186   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:17.922804   40910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.336891591s)
	I0916 10:49:17.922941   40910 round_trippers.go:463] GET https://10.138.0.48:8441/apis/storage.k8s.io/v1/storageclasses
	I0916 10:49:17.922953   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.922965   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.922977   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.930707   40910 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:49:17.930728   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.930737   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.930743   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.930748   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:17.930758   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:17.930763   40910 round_trippers.go:580]     Content-Length: 1273
	I0916 10:49:17.930770   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.930774   40910 round_trippers.go:580]     Audit-Id: 9e077ef1-e7db-4bed-bcd1-b27a8d302926
	I0916 10:49:17.930837   40910 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"standard","uid":"d6453ef1-d9d2-49dc-afbd-f07eda085888","resourceVersion":"311","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:49:17.931396   40910 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6453ef1-d9d2-49dc-afbd-f07eda085888","resourceVersion":"311","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:49:17.931460   40910 round_trippers.go:463] PUT https://10.138.0.48:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:49:17.931468   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.931478   40910 round_trippers.go:473]     Content-Type: application/json
	I0916 10:49:17.931483   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.931487   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.035155   40910 round_trippers.go:574] Response Status: 200 OK in 103 milliseconds
	I0916 10:49:18.035193   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.035203   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.035208   40910 round_trippers.go:580]     Audit-Id: 6faa9ba8-e9c3-4c46-82a8-79a43344462f
	I0916 10:49:18.035212   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.035217   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.035220   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.035226   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.035229   40910 round_trippers.go:580]     Content-Length: 1220
	I0916 10:49:18.035442   40910 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6453ef1-d9d2-49dc-afbd-f07eda085888","resourceVersion":"311","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:49:18.343579   40910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.78019227s)
	I0916 10:49:18.345676   40910 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:49:18.347010   40910 addons.go:510] duration metric: took 3.473261973s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 10:49:18.355794   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:18.355811   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.355820   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.355824   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.357814   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.357833   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.357843   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.357848   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.357853   40910 round_trippers.go:580]     Audit-Id: cb0a3e06-0913-45d3-8d44-f2a4fcf53152
	I0916 10:49:18.357857   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.357862   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.357867   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.358031   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:18.358534   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:18.358551   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.358559   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.358563   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.360229   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.360334   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.360346   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.360352   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.360355   40910 round_trippers.go:580]     Audit-Id: d9d563f3-9212-4e1f-8158-739186734848
	I0916 10:49:18.360358   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.360362   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.360366   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.360461   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:18.855630   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:18.855657   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.855668   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.855673   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.857346   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.857375   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.857383   40910 round_trippers.go:580]     Audit-Id: 61e10664-2cee-44a3-a164-49906cc3d58a
	I0916 10:49:18.857388   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.857392   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.857396   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.857400   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.857404   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.857495   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:18.858229   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:18.858249   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.858258   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.858269   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.859861   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.859881   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.859891   40910 round_trippers.go:580]     Audit-Id: 3985e165-2fe7-4da9-86fc-86dd41595480
	I0916 10:49:18.859896   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.859899   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.859904   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.859909   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.859914   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.860081   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:19.355699   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:19.355722   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.355730   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.355734   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.357922   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:19.357944   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.357953   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.357958   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.357963   40910 round_trippers.go:580]     Audit-Id: cec66111-ab6c-4555-9a39-692cca3a9573
	I0916 10:49:19.357968   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.357973   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.357976   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.358146   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:19.358622   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:19.358634   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.358641   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.358646   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.360335   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:19.360350   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.360357   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.360360   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.360363   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.360366   40910 round_trippers.go:580]     Audit-Id: 38f5fa24-9fca-439f-bb7d-6079d2867123
	I0916 10:49:19.360368   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.360371   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.360535   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:19.855936   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:19.855963   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.855973   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.855981   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.857909   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:19.857936   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.857947   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.857955   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.857961   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.857965   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.857969   40910 round_trippers.go:580]     Audit-Id: 7a07f273-e266-4b67-a185-21bc296d6b62
	I0916 10:49:19.857973   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.858117   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:19.858719   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:19.858740   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.858748   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.858751   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.860339   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:19.860369   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.860379   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.860384   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.860390   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.860394   40910 round_trippers.go:580]     Audit-Id: 090fe24b-bf36-4539-887d-23dafb158106
	I0916 10:49:19.860400   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.860406   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.860548   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:19.860972   40910 pod_ready.go:103] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"False"
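The entry above is the loop's verdict line: pod_ready has parsed the Ready condition out of the pod body it just fetched and found it False, so polling continues. For readers who want to reproduce this check outside the test harness, here is a minimal client-go sketch of the same poll. It is an illustration only, not minikube's actual pod_ready.go code: the kubeconfig path is assumed to be the default, the pod name is taken from this run, and the paired node GET that minikube issues on every iteration is omitted.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// which is the same signal pod_ready.go logs above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes ~/.kube/config points at the cluster under test.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll roughly every 500ms, matching the cadence of the log timestamps.
		for {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-7c65d6cfc9-9tmvq", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			ready := isPodReady(pod)
			fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
			if ready {
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}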
	I0916 10:49:20.355067   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:20.355104   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.355113   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.355118   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.356778   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.356799   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.356809   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.356814   40910 round_trippers.go:580]     Audit-Id: ad3c2817-d8ee-4118-85dd-8a2dae9f77c7
	I0916 10:49:20.356818   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.356826   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.356830   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.356837   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.356925   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"471","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I0916 10:49:20.357379   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:20.357392   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.357401   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.357405   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.358988   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.359006   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.359015   40910 round_trippers.go:580]     Audit-Id: 8c8d0dee-bcd9-4703-aae0-edd1d76ed8c5
	I0916 10:49:20.359020   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.359029   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.359034   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.359042   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.359050   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.359185   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:20.359542   40910 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:20.359558   40910 pod_ready.go:82] duration metric: took 2.504692215s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
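With coredns Ready after 2.5s, the checker turns to the etcd static pod in the entries that follow. As a rough shell-level equivalent of that wait, kubectl can block on the same Ready condition with the same 6-minute budget; note this is only an approximation, since kubectl wait uses a watch rather than the 500ms polling visible in this log:

	kubectl -n kube-system wait --for=condition=Ready pod/etcd-ubuntu-20-agent-2 --timeout=6m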
	I0916 10:49:20.359568   40910 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:20.359635   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:20.359652   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.359662   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.359673   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.361112   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.361130   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.361140   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.361146   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.361151   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.361156   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.361160   40910 round_trippers.go:580]     Audit-Id: 1919f5e2-5c80-4724-b16b-5d74564e1102
	I0916 10:49:20.361168   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.361311   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:20.361652   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:20.361664   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.361671   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.361676   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.362943   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.362962   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.362972   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.362978   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.362985   40910 round_trippers.go:580]     Audit-Id: 221062e0-12be-4e20-b2bd-9efd140cdd83
	I0916 10:49:20.362993   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.363001   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.363005   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.363119   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:20.859791   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:20.859816   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.859825   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.859829   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.861460   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.861477   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.861484   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.861490   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.861496   40910 round_trippers.go:580]     Audit-Id: 420aad55-3bd6-4e8d-acbf-7c2f06dc4d09
	I0916 10:49:20.861500   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.861504   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.861508   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.861606   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:20.862039   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:20.862058   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.862070   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.862078   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.863699   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.863712   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.863719   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.863725   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.863730   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.863736   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.863740   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.863744   40910 round_trippers.go:580]     Audit-Id: dec8e4d0-fae1-4712-9bb6-32a7a9d67964
	I0916 10:49:20.863844   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:21.359869   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:21.359889   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.359896   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.359901   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.361802   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:21.361825   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.361835   40910 round_trippers.go:580]     Audit-Id: 433cf211-2bf2-42df-9c16-45c32524e267
	I0916 10:49:21.361841   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.361846   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.361850   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.361853   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.361857   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.361957   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:21.362473   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:21.362490   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.362500   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.362507   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.363914   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:21.363934   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.363942   40910 round_trippers.go:580]     Audit-Id: 2c4bd7fa-4b72-49b4-8b4e-e55b95e99270
	I0916 10:49:21.363946   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.363950   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.363954   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.363956   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.363960   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.364096   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:21.859752   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:21.859781   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.859790   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.859796   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.861861   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:21.861878   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.861884   40910 round_trippers.go:580]     Audit-Id: ac2efe12-3aa5-4cc7-8fd9-7cd02097a34b
	I0916 10:49:21.861888   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.861892   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.861896   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.861899   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.861902   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.861992   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:21.862371   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:21.862383   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.862389   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.862393   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.864309   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:21.864330   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.864339   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.864345   40910 round_trippers.go:580]     Audit-Id: ab304f5a-44a3-4516-8132-bb28d212a0a6
	I0916 10:49:21.864350   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.864353   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.864358   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.864361   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.864452   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:22.359806   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:22.359826   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.359832   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.359836   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.361937   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:22.361957   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.361966   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.361971   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.361974   40910 round_trippers.go:580]     Audit-Id: f74d0130-daa6-443f-af26-3b3946bf48d8
	I0916 10:49:22.361976   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.361978   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.361981   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.362111   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:22.362621   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:22.362637   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.362645   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.362651   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.364360   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:22.364375   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.364381   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.364385   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.364388   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.364392   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.364395   40910 round_trippers.go:580]     Audit-Id: 50dd0252-9c6a-4eea-ab5f-8b9b9ecd38e5
	I0916 10:49:22.364398   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.364565   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:22.364938   40910 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
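The False verdict above is derived from the conditions array inside the truncated Pod body. For orientation, a not-yet-ready pod carries a condition shaped roughly like the following; the reason and message values here are typical ones and are assumed, not copied from this run:

	"conditions": [
	  {
	    "type": "Ready",
	    "status": "False",
	    "reason": "ContainersNotReady",
	    "message": "containers with unready status: [etcd]"
	  }
	]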
	I0916 10:49:22.860158   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:22.860178   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.860186   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.860191   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.863070   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:22.863093   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.863102   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.863108   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.863114   40910 round_trippers.go:580]     Audit-Id: 83fd933e-08cf-4ab2-a969-79221445ce39
	I0916 10:49:22.863119   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.863122   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.863125   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.863284   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:22.863711   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:22.863726   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.863735   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.863741   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.865687   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:22.865726   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.865736   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.865742   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.865747   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.865751   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.865755   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.865762   40910 round_trippers.go:580]     Audit-Id: cf9c05b9-a6fd-41e2-9e13-57c810547f6d
	I0916 10:49:22.865896   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:23.360518   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:23.360541   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.360550   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.360554   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.362886   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:23.362903   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.362910   40910 round_trippers.go:580]     Audit-Id: 07a4e463-176f-4454-97ec-0b78b4c7ca05
	I0916 10:49:23.362914   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.362917   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.362920   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.362923   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.362925   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.363051   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:23.363505   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:23.363516   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.363522   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.363525   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.365087   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:23.365100   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.365106   40910 round_trippers.go:580]     Audit-Id: 482db780-b1d8-47f3-97f8-d288082cfe7a
	I0916 10:49:23.365111   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.365116   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.365120   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.365124   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.365128   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.365316   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:23.859916   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:23.859942   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.859950   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.859955   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.862144   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:23.862163   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.862175   40910 round_trippers.go:580]     Audit-Id: ff969368-b1ed-4db1-a30f-c3d13e0f8ef6
	I0916 10:49:23.862183   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.862187   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.862192   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.862196   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.862200   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.862332   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:23.862764   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:23.862777   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.862785   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.862794   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.864625   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:23.864642   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.864650   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.864654   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.864660   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.864665   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.864670   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.864674   40910 round_trippers.go:580]     Audit-Id: 4ffd8ec2-484c-4cfd-bae6-50df2ace71fe
	I0916 10:49:23.864778   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:24.360444   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:24.360466   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.360474   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.360479   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.362638   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:24.362660   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.362667   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.362673   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.362679   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.362686   40910 round_trippers.go:580]     Audit-Id: a98b4072-039d-4500-8e4d-1a25241af7a0
	I0916 10:49:24.362691   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.362694   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.362853   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:24.363275   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:24.363289   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.363295   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.363299   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.365187   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:24.365202   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.365209   40910 round_trippers.go:580]     Audit-Id: c2f1e152-ce36-4a0b-ba2f-935d53a3eac4
	I0916 10:49:24.365214   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.365220   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.365226   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.365230   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.365235   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.365353   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:24.365725   40910 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
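All of the round_trippers entries in this section are client-go's built-in HTTP debug output: the request URL and status appear at lower verbosity, with headers and a truncated response body added at higher verbosity (the "[truncated N chars]" markers come from that logger, not from this report). Comparable output can be produced by hand against the same API server with kubectl's -v flag, for example:

	kubectl -n kube-system get pod etcd-ubuntu-20-agent-2 -v=8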
	I0916 10:49:24.859970   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:24.860005   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.860013   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.860018   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.862184   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:24.862206   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.862222   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.862229   40910 round_trippers.go:580]     Audit-Id: 45b404e7-5972-43e6-866a-d34f739c24da
	I0916 10:49:24.862233   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.862238   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.862242   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.862247   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.862381   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:24.862887   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:24.862903   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.862913   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.862921   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.864699   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:24.864716   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.864723   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.864728   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.864731   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.864737   40910 round_trippers.go:580]     Audit-Id: abda13ac-f0a9-46e8-9d2c-80b04566dfeb
	I0916 10:49:24.864741   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.864743   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.864882   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:25.360639   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:25.360661   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.360670   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.360674   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.362905   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:25.362929   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.362939   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.362945   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.362950   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.362955   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.362959   40910 round_trippers.go:580]     Audit-Id: e5c458b6-cb9c-44cb-bb96-41ccf94f251b
	I0916 10:49:25.362965   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.363105   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:25.363507   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:25.363520   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.363527   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.363531   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.365376   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:25.365392   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.365398   40910 round_trippers.go:580]     Audit-Id: 2b2f7729-14f1-46f6-ba9b-6e26d7245db4
	I0916 10:49:25.365403   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.365408   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.365414   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.365420   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.365423   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.365510   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:25.860100   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:25.860123   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.860131   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.860135   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.862284   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:25.862304   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.862312   40910 round_trippers.go:580]     Audit-Id: 9f536b90-6dea-429a-aca7-8533db52a1e1
	I0916 10:49:25.862319   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.862326   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.862331   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.862336   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.862341   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.862429   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:25.862830   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:25.862844   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.862853   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.862857   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.864414   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:25.864428   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.864434   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.864437   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.864450   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.864453   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.864456   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.864458   40910 round_trippers.go:580]     Audit-Id: 02b76fec-5096-40b5-9b07-824da3de5d1e
	I0916 10:49:25.864580   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:26.360467   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:26.360488   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.360496   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.360500   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.362759   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:26.362781   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.362790   40910 round_trippers.go:580]     Audit-Id: d564d6a2-7d5a-42d1-b9fa-b76a94d78dfe
	I0916 10:49:26.362795   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.362800   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.362803   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.362807   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.362810   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.362974   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:26.363411   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:26.363427   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.363433   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.363438   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.365243   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:26.365257   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.365266   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.365272   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.365277   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.365280   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.365285   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.365289   40910 round_trippers.go:580]     Audit-Id: ac9c40fe-654d-43f4-9075-5a38c991f929
	I0916 10:49:26.365436   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:26.365842   40910 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:26.860024   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:26.860047   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.860055   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.860057   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.862151   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:26.862174   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.862185   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.862192   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.862198   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.862203   40910 round_trippers.go:580]     Audit-Id: d54ed749-7239-493d-90c4-b5b6f768e14a
	I0916 10:49:26.862207   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.862212   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.862335   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:26.862763   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:26.862776   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.862782   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.862786   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.864518   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:26.864532   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.864539   40910 round_trippers.go:580]     Audit-Id: 3aa86210-1512-451b-af62-bb440f2a5e34
	I0916 10:49:26.864542   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.864546   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.864548   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.864551   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.864554   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.864676   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:27.360306   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:27.360326   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.360334   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.360338   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.362264   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.362295   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.362305   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.362310   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.362314   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.362318   40910 round_trippers.go:580]     Audit-Id: 6b4d4fe2-ff6b-4ea7-86e0-122bca1d12e8
	I0916 10:49:27.362324   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.362328   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.362426   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:27.362914   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:27.362933   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.362943   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.362953   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.364573   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.364595   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.364604   40910 round_trippers.go:580]     Audit-Id: 258778c2-bfc9-4fdc-8444-b6649e01e846
	I0916 10:49:27.364609   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.364614   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.364621   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.364624   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.364629   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.364800   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:27.860464   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:27.860487   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.860495   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.860499   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.862476   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.862495   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.862504   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.862509   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.862513   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.862516   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.862521   40910 round_trippers.go:580]     Audit-Id: f46952ad-37b9-450b-8019-dd8789a2be40
	I0916 10:49:27.862527   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.862644   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"482","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6657 chars]
	I0916 10:49:27.863068   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:27.863081   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.863087   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.863091   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.864607   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.864625   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.864633   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.864638   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.864642   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.864646   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.864650   40910 round_trippers.go:580]     Audit-Id: f5c0fbe7-4017-48fd-a03b-a1c27aeade11
	I0916 10:49:27.864655   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.864786   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:27.865215   40910 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:27.865233   40910 pod_ready.go:82] duration metric: took 7.505656859s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
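
With etcd Ready after about 7.5 s, the same wait is applied next to kube-apiserver, each pod getting its own 6m0s budget (per the "waiting up to 6m0s" lines). A sketch of how those per-pod waits and the "duration metric" lines could be sequenced, reusing waitPodReady from the previous sketch and assuming its imports plus fmt; the helper name and the component loop are illustrative, not minikube's code:

// waitControlPlane runs the per-pod waits this log traces in sequence:
// etcd first, then kube-apiserver, each with its own 6m0s budget.
// Pod naming (component dash node name) follows the names seen above.
func waitControlPlane(ctx context.Context, cs kubernetes.Interface, node string) error {
	for _, component := range []string{"etcd", "kube-apiserver"} {
		name := fmt.Sprintf("%s-%s", component, node)
		start := time.Now()
		if err := waitPodReady(ctx, cs, "kube-system", name, 6*time.Minute); err != nil {
			return fmt.Errorf("pod %q never became Ready: %w", name, err)
		}
		// Corresponds to the "duration metric: took ..." lines in the log.
		fmt.Printf("took %s for pod %q to be Ready\n", time.Since(start), name)
	}
	return nil
}
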
	I0916 10:49:27.865244   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:27.865318   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:27.865328   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.865337   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.865346   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.866792   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.866808   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.866817   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.866822   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.866827   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.866832   40910 round_trippers.go:580]     Audit-Id: 6a226f55-f577-4b15-a525-fee48a3732ca
	I0916 10:49:27.866835   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.866842   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.866962   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:27.867388   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:27.867401   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.867407   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.867411   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.868713   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.868730   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.868739   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.868743   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.868748   40910 round_trippers.go:580]     Audit-Id: f60c7a2a-f2ee-4138-b1ed-d6f70ddd2fc1
	I0916 10:49:27.868753   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.868757   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.868762   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.868883   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:28.365685   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:28.365724   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.365735   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.365740   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.367832   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:28.367855   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.367865   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.367871   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.367875   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.367878   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.367885   40910 round_trippers.go:580]     Audit-Id: 3519a8ba-89e8-449c-a3ce-81d2d117013a
	I0916 10:49:28.367888   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.368091   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:28.368545   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:28.368560   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.368567   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.368570   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.370123   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:28.370137   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.370144   40910 round_trippers.go:580]     Audit-Id: 81c91e77-451a-4e82-8ad3-17058cb89bfb
	I0916 10:49:28.370148   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.370150   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.370153   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.370155   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.370160   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.370272   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:28.865847   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:28.865874   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.865879   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.865884   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.868017   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:28.868032   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.868037   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.868064   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.868069   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.868073   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.868077   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.868080   40910 round_trippers.go:580]     Audit-Id: 22ec9eaf-934f-40ee-aa27-698b4f811420
	I0916 10:49:28.868226   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:28.868648   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:28.868660   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.868665   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.868669   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.870273   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:28.870292   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.870297   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.870302   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.870304   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.870307   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.870310   40910 round_trippers.go:580]     Audit-Id: 81eeb752-21c3-40bc-9727-aca860a81cdb
	I0916 10:49:28.870315   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.870509   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:29.366101   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:29.366124   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.366129   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.366134   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.368166   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:29.368185   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.368193   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.368198   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.368202   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.368207   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.368212   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.368216   40910 round_trippers.go:580]     Audit-Id: 9cd3e580-cdfb-423c-bc2f-6ea6d94d900b
	I0916 10:49:29.368357   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:29.368785   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:29.368800   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.368809   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.368816   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.370388   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:29.370409   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.370419   40910 round_trippers.go:580]     Audit-Id: 9bd8cb70-e364-45e1-8286-ea8e9be4eaef
	I0916 10:49:29.370426   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.370435   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.370439   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.370444   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.370448   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.370577   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:29.866238   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:29.866273   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.866284   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.866290   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.868503   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:29.868527   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.868535   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.868541   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.868546   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.868550   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.868557   40910 round_trippers.go:580]     Audit-Id: cbc42e1f-b214-49e1-a3e2-b01bb04dc7fd
	I0916 10:49:29.868562   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.868738   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:29.869186   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:29.869199   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.869206   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.869211   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.870842   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:29.870873   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.870884   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.870891   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.870899   40910 round_trippers.go:580]     Audit-Id: 7898da41-390b-4bc4-bbb1-ceb031be7790
	I0916 10:49:29.870907   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.870913   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.870920   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.871087   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:29.871450   40910 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:30.365532   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:30.365553   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.365563   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.365569   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.367877   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:30.367900   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.367909   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.367914   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.367918   40910 round_trippers.go:580]     Audit-Id: 709e831c-1b03-4e44-be9e-a492de5f1eb0
	I0916 10:49:30.367922   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.367925   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.367929   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.368417   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:30.369016   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:30.369033   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.369043   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.369057   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.370816   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:30.370831   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.370837   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.370840   40910 round_trippers.go:580]     Audit-Id: caa1d57b-b973-4df7-8a49-fe33023b8323
	I0916 10:49:30.370845   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.370847   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.370850   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.370853   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.370970   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:30.865613   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:30.865643   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.865653   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.865659   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.868081   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:30.868102   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.868112   40910 round_trippers.go:580]     Audit-Id: 5cccd51f-69ec-4732-a082-baf813bc949a
	I0916 10:49:30.868118   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.868121   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.868125   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.868129   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.868134   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.868283   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:30.868749   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:30.868766   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.868776   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.868783   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.870511   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:30.870536   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.870548   40910 round_trippers.go:580]     Audit-Id: 3de4db8a-a9d7-48e5-8743-b3320d951d93
	I0916 10:49:30.870554   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.870559   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.870566   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.870574   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.870579   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.870769   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.366254   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:31.366282   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.366292   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.366297   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.368446   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.368461   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.368467   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.368471   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.368474   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.368477   40910 round_trippers.go:580]     Audit-Id: 2e3cdb46-3866-4292-9b40-16416c47d3db
	I0916 10:49:31.368482   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.368484   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.368626   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"489","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8732 chars]
	I0916 10:49:31.369064   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.369077   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.369083   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.369086   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.370882   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.370897   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.370902   40910 round_trippers.go:580]     Audit-Id: cb46d7f4-7e31-4e5f-af35-de6fc33b39d0
	I0916 10:49:31.370906   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.370912   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.370917   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.370921   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.370927   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.371093   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.371463   40910 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.371482   40910 pod_ready.go:82] duration metric: took 3.506229892s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.371495   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.371548   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-20-agent-2
	I0916 10:49:31.371558   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.371567   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.371572   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.373131   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.373152   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.373161   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.373167   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.373171   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.373176   40910 round_trippers.go:580]     Audit-Id: 8c1b2bc3-4321-46dd-a9e8-a793bd0581e6
	I0916 10:49:31.373180   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.373184   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.373370   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-ubuntu-20-agent-2","namespace":"kube-system","uid":"45d39430-8de5-404d-a2b8-bbf47738a4c7","resourceVersion":"478","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ccbff5351fb3e01bcec8c471c38698f0","kubernetes.io/config.mirror":"ccbff5351fb3e01bcec8c471c38698f0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043157142Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8310 chars]
	I0916 10:49:31.373911   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.373927   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.373936   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.373944   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.375335   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.375348   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.375353   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.375357   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.375361   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.375367   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.375372   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.375380   40910 round_trippers.go:580]     Audit-Id: 490b2e37-3f44-4b3d-b73f-edf84078751f
	I0916 10:49:31.375599   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.375961   40910 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.376016   40910 pod_ready.go:82] duration metric: took 4.501071ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.376032   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.376092   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lt5f5
	I0916 10:49:31.376105   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.376116   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.376126   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.377440   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.377451   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.377458   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.377464   40910 round_trippers.go:580]     Audit-Id: fac59a65-6a75-46f0-991d-a4f66597a838
	I0916 10:49:31.377469   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.377475   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.377480   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.377489   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.377594   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lt5f5","generateName":"kube-proxy-","namespace":"kube-system","uid":"2e01c31f-c798-45c0-98a2-ee94c3b9d631","resourceVersion":"400","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4b7ac346-9c76-4a4c-9bfa-9795be9bed9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b7ac346-9c76-4a4c-9bfa-9795be9bed9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6391 chars]
	I0916 10:49:31.378004   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.378018   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.378024   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.378029   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.379340   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.379357   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.379366   40910 round_trippers.go:580]     Audit-Id: c70261f9-8004-4761-9b84-7c5500180ba3
	I0916 10:49:31.379372   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.379376   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.379382   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.379389   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.379393   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.379540   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.380019   40910 pod_ready.go:93] pod "kube-proxy-lt5f5" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.380035   40910 pod_ready.go:82] duration metric: took 3.995814ms for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.380043   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.380091   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2
	I0916 10:49:31.380098   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.380106   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.380111   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.381438   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.381450   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.381458   40910 round_trippers.go:580]     Audit-Id: 5348a0ff-19c6-4754-9776-14f62783efc4
	I0916 10:49:31.381465   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.381473   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.381479   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.381485   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.381489   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.381556   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-ubuntu-20-agent-2","namespace":"kube-system","uid":"a9041542-d7b5-4571-87c5-a6e9e4ecfd5e","resourceVersion":"480","creationTimestamp":"2024-09-16T10:48:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6de72559ec804c46642b9388a6a99321","kubernetes.io/config.mirror":"6de72559ec804c46642b9388a6a99321","kubernetes.io/config.seen":"2024-09-16T10:48:50.455155081Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5192 chars]
	I0916 10:49:31.381932   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.381949   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.381955   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.381962   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.383268   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.383281   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.383287   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.383291   40910 round_trippers.go:580]     Audit-Id: c3ab90c6-6b9c-4fb1-aaf1-51037b21396f
	I0916 10:49:31.383294   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.383297   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.383301   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.383303   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.383501   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.383914   40910 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.383930   40910 pod_ready.go:82] duration metric: took 3.881215ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.383943   40910 pod_ready.go:39] duration metric: took 13.541849297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
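The trace above is minikube's readiness wait in action: roughly every 500 ms it GETs the pod from the apiserver, inspects the pod's Ready condition, GETs the node, and stops once the condition flips to True (the pod_ready.go:93 lines). As an illustration only, a minimal client-go sketch of that polling pattern follows; it polls just the pod, the kubeconfig path and pod name are taken from this run and would differ elsewhere, and it is not minikube's actual pod_ready.go implementation.

// Sketch: poll a pod until its Ready condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path from this run; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3763/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The log waits up to 6m0s per pod; mirror that as an overall deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
			"kube-apiserver-ubuntu-20-agent-2", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Roughly the 500 ms cadence visible in the round_trippers trace.
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}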
	I0916 10:49:31.383965   40910 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:49:31.384035   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:31.402292   40910 api_server.go:72] duration metric: took 16.523814653s to wait for apiserver process to appear ...
	I0916 10:49:31.402310   40910 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:49:31.402332   40910 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:31.405679   40910 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:49:31.405747   40910 round_trippers.go:463] GET https://10.138.0.48:8441/version
	I0916 10:49:31.405757   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.405765   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.405770   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.406428   40910 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:49:31.406442   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.406448   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.406452   40910 round_trippers.go:580]     Audit-Id: c52d338f-b459-405a-9a62-36fe356eca72
	I0916 10:49:31.406456   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.406459   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.406462   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.406464   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.406468   40910 round_trippers.go:580]     Content-Length: 263
	I0916 10:49:31.406480   40910 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:49:31.406539   40910 api_server.go:141] control plane version: v1.31.1
	I0916 10:49:31.406552   40910 api_server.go:131] duration metric: took 4.238245ms to wait for apiserver health ...
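After the pgrep process check, the health wait reduces to two GETs: /healthz must answer 200 with the body "ok" (api_server.go:279), and /version returns the JSON build info from which the control-plane version is read. A rough sketch of those two probes is below; the endpoint address is the one from this run, and TLS verification is skipped purely to keep the sketch short — that is an assumption, and a real client would present the cluster CA and client certificates from the kubeconfig instead.

// Sketch: probe an apiserver's /healthz and /version endpoints.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: InsecureSkipVerify is for brevity only; do not do this
	// in real code — load the cluster CA and client certs instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Liveness probe: /healthz should answer 200 with the body "ok".
	resp, err := client.Get("https://10.138.0.48:8441/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %q\n", resp.StatusCode, body)

	// Build info: /version returns the JSON document shown in the log.
	resp, err = client.Get("https://10.138.0.48:8441/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}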
	I0916 10:49:31.406559   40910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:49:31.406604   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:31.406611   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.406617   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.406620   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.408753   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.408768   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.408777   40910 round_trippers.go:580]     Audit-Id: ca00f77e-55b6-40d3-942d-caeba2f2b949
	I0916 10:49:31.408783   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.408787   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.408794   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.408800   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.408804   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.409173   40910 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"471","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53224 chars]
	I0916 10:49:31.410747   40910 system_pods.go:59] 7 kube-system pods found
	I0916 10:49:31.410769   40910 system_pods.go:61] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:49:31.410774   40910 system_pods.go:61] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:49:31.410777   40910 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [d9fac362-fee0-4ee4-9a06-22b343085d2d] Running
	I0916 10:49:31.410781   40910 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:49:31.410785   40910 system_pods.go:61] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:49:31.410788   40910 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:49:31.410793   40910 system_pods.go:61] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:49:31.410799   40910 system_pods.go:74] duration metric: took 4.235295ms to wait for pod list to return data ...
	I0916 10:49:31.410806   40910 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:49:31.410859   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:49:31.410869   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.410876   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.410880   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.412925   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.412940   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.412948   40910 round_trippers.go:580]     Audit-Id: c38fc8ed-86c8-4b02-b744-7085955fb70a
	I0916 10:49:31.412955   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.412961   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.412965   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.412969   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.412975   40910 round_trippers.go:580]     Content-Length: 261
	I0916 10:49:31.412980   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.412995   40910 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9d76d48e-93f1-40f0-9e21-ae9ef2c7919a","resourceVersion":"293","creationTimestamp":"2024-09-16T10:48:55Z"}}]}
	I0916 10:49:31.413218   40910 default_sa.go:45] found service account: "default"
	I0916 10:49:31.413236   40910 default_sa.go:55] duration metric: took 2.424518ms for default service account to be created ...
	I0916 10:49:31.413244   40910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:49:31.566665   40910 request.go:632] Waited for 153.359422ms due to client-side throttling, not priority and fairness, request: GET:https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:31.566718   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:31.566723   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.566730   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.566735   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.569281   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.569304   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.569311   40910 round_trippers.go:580]     Audit-Id: 386ae252-9edd-4dbb-81ae-7c9910b78122
	I0916 10:49:31.569315   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.569318   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.569321   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.569323   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.569326   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.569892   40910 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"471","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53224 chars]
	I0916 10:49:31.571521   40910 system_pods.go:86] 7 kube-system pods found
	I0916 10:49:31.571550   40910 system_pods.go:89] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:49:31.571556   40910 system_pods.go:89] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:49:31.571561   40910 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [d9fac362-fee0-4ee4-9a06-22b343085d2d] Running
	I0916 10:49:31.571566   40910 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:49:31.571570   40910 system_pods.go:89] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:49:31.571574   40910 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:49:31.571581   40910 system_pods.go:89] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:49:31.571591   40910 system_pods.go:126] duration metric: took 158.342376ms to wait for k8s-apps to be running ...
	I0916 10:49:31.571602   40910 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:49:31.571647   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:49:31.584366   40910 system_svc.go:56] duration metric: took 12.755896ms WaitForService to wait for kubelet
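The kubelet check above is a pure exit-code probe: systemctl is-active --quiet prints nothing and signals state through its exit status, so no output parsing is needed. A small sketch of the same probe follows, using the conventional "systemctl is-active --quiet kubelet" form and assuming it already runs with sufficient privileges rather than via sudo.

// Sketch: check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "is-active --quiet" exits 0 when the unit is active, non-zero otherwise.
	// Assumption: running as root, so sudo is unnecessary here.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}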
	I0916 10:49:31.584391   40910 kubeadm.go:582] duration metric: took 16.705915399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:49:31.584407   40910 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:49:31.766809   40910 request.go:632] Waited for 182.321668ms due to client-side throttling, not priority and fairness, request: GET:https://10.138.0.48:8441/api/v1/nodes
	I0916 10:49:31.766863   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes
	I0916 10:49:31.766868   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.766875   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.766878   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.769413   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.769431   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.769438   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.769442   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.769448   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.769454   40910 round_trippers.go:580]     Audit-Id: 5050ce4a-e361-49e4-87da-8631e833fb0a
	I0916 10:49:31.769458   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.769462   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.769624   40910 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{
"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024 [truncated 8423 chars]
	I0916 10:49:31.770045   40910 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:31.770069   40910 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:31.770080   40910 node_conditions.go:105] duration metric: took 185.6687ms to run NodePressure ...
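The NodePressure step lists the nodes once and reads capacity straight from node.Status.Capacity, which is where the 304681132Ki of ephemeral storage and the 8 CPUs above come from. A minimal client-go sketch of reading those quantities is below; as before, the kubeconfig path is an assumption taken from this run, and this is an illustration, not minikube's node_conditions.go.

// Sketch: list nodes and print their CPU and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: same illustrative kubeconfig path as the earlier sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3763/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Copy quantities out of the ResourceList map so their
		// pointer-receiver methods (String, Value) can be called.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}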
	I0916 10:49:31.770090   40910 start.go:241] waiting for startup goroutines ...
	I0916 10:49:31.770097   40910 start.go:246] waiting for cluster config update ...
	I0916 10:49:31.770106   40910 start.go:255] writing updated cluster config ...
	I0916 10:49:31.770345   40910 exec_runner.go:51] Run: rm -f paused
	I0916 10:49:31.774603   40910 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:49:31.775891   40910 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:49:32 UTC. --
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:49:13 ubuntu-20-agent-2 cri-dockerd[39148]: time="2024-09-16T10:49:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b9df597ae39c417a09955b7152d786e4b3098b8c35431d4eda14b67a7326566\""
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: cri-docker.service: Succeeded.
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 16 10:49:14 ubuntu-20-agent-2 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Start docker client with request timeout 0s"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Loaded network plugin cni"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 16 10:49:14 ubuntu-20-agent-2 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dc3e2cee9ae5f57aadbc2aaceeb4eab6703250b588a22cbe45191fdfd498d95d/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28927fc2d6545e5de958c3a564755d6cc294c19270fbd681fecefdc67d9960c8/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b51e183b7b46cb84c0a36aeef87ab5db48a381bf69bd9789f03783caeb9979c6/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 dockerd[41620]: time="2024-09-16T10:49:15.620133539Z" level=info msg="ignoring event" container=0d522fc642e51982c70238dfb6f58169923c1becb405bcb2e6462dabf54cf54d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:19 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6af15c63a0094873696c63bdb5039e18197b9b2cabbc974c70cac80073df9cb5/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a45299c063bb1       c69fa2e9cbf5f       13 seconds ago      Running             coredns                   1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	0d522fc642e51       6e38f40d628db       17 seconds ago      Exited              storage-provisioner       2                   b51e183b7b46c       storage-provisioner
	ff9c282d39039       2e96e5913fc06       17 seconds ago      Running             etcd                      1                   ad166eb13016a       etcd-ubuntu-20-agent-2
	552dd24d3b02d       60c005f310ff3       17 seconds ago      Running             kube-proxy                1                   dc3e2cee9ae5f       kube-proxy-lt5f5
	67e355cfcbda0       6bab7719df100       17 seconds ago      Running             kube-apiserver            1                   28927fc2d6545       kube-apiserver-ubuntu-20-agent-2
	bd9bbeacd72df       9aa1fad941575       17 seconds ago      Running             kube-scheduler            1                   59ae2583e1f56       kube-scheduler-ubuntu-20-agent-2
	76c209608f0b3       175ffd71cce3d       17 seconds ago      Running             kube-controller-manager   1                   317985ddf47a1       kube-controller-manager-ubuntu-20-agent-2
	458949ce6fd13       c69fa2e9cbf5f       36 seconds ago      Exited              coredns                   0                   6b9df597ae39c       coredns-7c65d6cfc9-9tmvq
	5d4b6365fb999       60c005f310ff3       36 seconds ago      Exited              kube-proxy                0                   dc4e1eb7881a9       kube-proxy-lt5f5
	8949fc35206b3       2e96e5913fc06       46 seconds ago      Exited              etcd                      0                   33693827aa1af       etcd-ubuntu-20-agent-2
	8b95544e0ae0c       9aa1fad941575       46 seconds ago      Exited              kube-scheduler            0                   75baf2b9ae9f6       kube-scheduler-ubuntu-20-agent-2
	a84496f2946e5       6bab7719df100       46 seconds ago      Exited              kube-apiserver            0                   a1b484ea8be60       kube-apiserver-ubuntu-20-agent-2
	043a8354243a6       175ffd71cce3d       46 seconds ago      Exited              kube-controller-manager   0                   cb842334bb4ef       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [458949ce6fd1] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44547 - 2953 "HINFO IN 7152552342506087924.8521799898990137584. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018204297s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:49:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     37s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 36s   kube-proxy       
	  Normal   Starting                 14s   kube-proxy       
	  Normal   Starting                 42s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 42s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  42s   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           38s   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           11s   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	
	
	==> etcd [8949fc35206b] <==
	{"level":"info","ts":"2024-09-16T10:48:47.120366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:48:47.120375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:48:47.121315Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.121526Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:48:47.121550Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:48:47.121531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:48:47.121866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:48:47.121923Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:48:47.121993Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.122061Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.122082Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.122675Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:48:47.122722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:48:47.123483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:48:47.123950Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:02.638546Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:49:02.638610Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ubuntu-20-agent-2","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"]}
	{"level":"warn","ts":"2024-09-16T10:49:02.638703Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 10.138.0.48:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:49:02.638776Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 10.138.0.48:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:49:02.640558Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:49:02.640658Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:49:02.664428Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b435b960bec7c3c","current-leader-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:02.666169Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:02.666259Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:02.666270Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ubuntu-20-agent-2","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"]}
	
	
	==> etcd [ff9c282d3903] <==
	{"level":"info","ts":"2024-09-16T10:49:15.690895Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:15.691581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:15.691652Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:15.692508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:15.694942Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:15.695055Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:15.695077Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:15.695182Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:15.695210Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:16.982566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:49:16.982616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:49:16.982658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:49:16.982673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.982679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.982688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.982695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.985345Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:16.985369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:16.985345Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:16.985594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:16.985619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:16.986983Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:16.987215Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:16.988059Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:16.988378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:49:32 up 32 min,  0 users,  load average: 0.94, 0.45, 0.27
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [67e355cfcbda] <==
	I0916 10:49:17.820446       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0916 10:49:17.820454       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0916 10:49:17.819971       1 controller.go:119] Starting legacy_token_tracking_controller
	I0916 10:49:17.820471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0916 10:49:17.920209       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:49:17.920325       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:49:17.920379       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:17.920500       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:17.920513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:17.920635       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:17.920648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:17.920636       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:17.920690       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:17.920701       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:17.920707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:17.920715       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:17.929865       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0916 10:49:17.930228       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:17.933615       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:49:17.936733       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:49:17.936760       1 policy_source.go:224] refreshing policies
	I0916 10:49:17.942076       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:18.823613       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:21.505222       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:49:21.555277       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a84496f2946e] <==
	W0916 10:49:11.889275       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:11.971366       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:11.983065       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.025617       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.041054       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.069465       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.100121       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.120910       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.155119       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.171966       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.236000       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.307049       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.318425       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.344361       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.345630       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.357221       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.358492       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.364943       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.376569       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.433424       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.472392       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.509051       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.541793       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.635078       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.653468       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [043a8354243a] <==
	I0916 10:48:54.721782       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:48:54.721795       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:48:54.721803       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:48:54.727948       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ubuntu-20-agent-2" podCIDRs=["10.244.0.0/24"]
	I0916 10:48:54.727974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:48:54.728102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:48:54.769897       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:48:54.867735       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:48:54.917646       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:48:54.922999       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:48:54.923061       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:48:54.927848       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:48:55.337805       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:48:55.366499       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:48:55.366531       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:48:55.480435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:48:55.843059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="417.548288ms"
	I0916 10:48:55.852090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.982962ms"
	I0916 10:48:55.855974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.841817ms"
	I0916 10:48:55.856069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.897µs"
	I0916 10:48:56.545846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="194.1µs"
	I0916 10:48:57.582160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.052µs"
	I0916 10:48:57.587309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.871µs"
	I0916 10:48:57.590430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.694µs"
	I0916 10:49:00.899110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	
	
	==> kube-controller-manager [76c209608f0b] <==
	I0916 10:49:21.202394       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0916 10:49:21.202575       1 shared_informer.go:320] Caches are synced for deployment
	I0916 10:49:21.202574       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:49:21.207037       1 shared_informer.go:320] Caches are synced for node
	I0916 10:49:21.207105       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 10:49:21.207106       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:21.207156       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:49:21.207161       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:49:21.207168       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:49:21.207222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:49:21.207642       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:21.209151       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:49:21.210332       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:49:21.210560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.215646ms"
	I0916 10:49:21.210735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.532µs"
	I0916 10:49:21.212573       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:49:21.252382       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:49:21.283055       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0916 10:49:21.391324       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:49:21.402727       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:49:21.406995       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:49:21.452298       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:49:21.822159       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:21.855859       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:21.855886       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [552dd24d3b02] <==
	I0916 10:49:15.706406       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:17.853578       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:17.853659       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:17.900242       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:17.900311       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:17.903531       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:17.903908       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:17.903945       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:17.905542       1 config.go:328] "Starting node config controller"
	I0916 10:49:17.905565       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:17.905768       1 config.go:199] "Starting service config controller"
	I0916 10:49:17.905783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:17.905828       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:17.906166       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:18.006137       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:18.006194       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:18.007364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5d4b6365fb99] <==
	I0916 10:48:56.237395       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:48:56.327177       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:48:56.327237       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:48:56.348155       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:48:56.348239       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:48:56.350670       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:48:56.351104       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:48:56.351137       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:48:56.352578       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:48:56.352619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:48:56.352651       1 config.go:199] "Starting service config controller"
	I0916 10:48:56.352661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:48:56.353009       1 config.go:328] "Starting node config controller"
	I0916 10:48:56.353023       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:48:56.452809       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:48:56.452812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:48:56.453083       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b95544e0ae0] <==
	E0916 10:48:47.999867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:47.999760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:48:47.999898       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:48.817391       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:48:48.817439       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:48:48.932232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:48.932274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:48.969626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:48:48.969664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:48.976089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:48:48.976142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.046101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:48:49.046157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.072535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:49.072575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.117363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:49.117402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.119092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:48:49.119120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.152686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:49.152732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:48:50.595595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:02.635938       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:49:02.636080       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0916 10:49:02.636275       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bd9bbeacd72d] <==
	I0916 10:49:15.971969       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:17.842625       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:17.842666       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:49:17.842682       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:17.842691       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:17.875058       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:17.875385       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:17.878777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:17.878838       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:17.878894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:17.878920       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:17.979658       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:49:32 UTC. --
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.175494   40049 status_manager.go:851] "Failed to get status for pod" podUID="6de72559ec804c46642b9388a6a99321" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.179369   40049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180073   40049 status_manager.go:851] "Failed to get status for pod" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180388   40049 status_manager.go:851] "Failed to get status for pod" podUID="2e01c31f-c798-45c0-98a2-ee94c3b9d631" pod="kube-system/kube-proxy-lt5f5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lt5f5\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180684   40049 status_manager.go:851] "Failed to get status for pod" podUID="64b157a7-a274-493f-ad2d-3eb841c345db" pod="kube-system/coredns-7c65d6cfc9-9tmvq" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180906   40049 status_manager.go:851] "Failed to get status for pod" podUID="a5ababb2af12b481e591ddfe93ae3b1f" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.181122   40049 status_manager.go:851] "Failed to get status for pod" podUID="6de72559ec804c46642b9388a6a99321" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.181407   40049 status_manager.go:851] "Failed to get status for pod" podUID="5b137b06bdfaed6743b655439322dfe0" pod="kube-system/etcd-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.181670   40049 status_manager.go:851] "Failed to get status for pod" podUID="ccbff5351fb3e01bcec8c471c38698f0" pod="kube-system/kube-controller-manager-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.191939   40049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d1d58f49444d76811be9a80b2bfc8ab683f3b2f0db60a7ce1a40530a024e6e"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.192925   40049 status_manager.go:851] "Failed to get status for pod" podUID="5b137b06bdfaed6743b655439322dfe0" pod="kube-system/etcd-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.193316   40049 status_manager.go:851] "Failed to get status for pod" podUID="ccbff5351fb3e01bcec8c471c38698f0" pod="kube-system/kube-controller-manager-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.193623   40049 status_manager.go:851] "Failed to get status for pod" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.194329   40049 status_manager.go:851] "Failed to get status for pod" podUID="2e01c31f-c798-45c0-98a2-ee94c3b9d631" pod="kube-system/kube-proxy-lt5f5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lt5f5\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.194705   40049 status_manager.go:851] "Failed to get status for pod" podUID="64b157a7-a274-493f-ad2d-3eb841c345db" pod="kube-system/coredns-7c65d6cfc9-9tmvq" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.195033   40049 status_manager.go:851] "Failed to get status for pod" podUID="a5ababb2af12b481e591ddfe93ae3b1f" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.195349   40049 status_manager.go:851] "Failed to get status for pod" podUID="6de72559ec804c46642b9388a6a99321" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:16.219023   40049 scope.go:117] "RemoveContainer" containerID="2d84812a1876e909acb666fe34bc9157c82cec862fdaf46f48e283ad4b6e3073"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:16.219416   40049 scope.go:117] "RemoveContainer" containerID="0d522fc642e51982c70238dfb6f58169923c1becb405bcb2e6462dabf54cf54d"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:16.219612   40049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dfe4a726-3764-4daf-a322-8f33ae3528f7)\"" pod="kube-system/storage-provisioner" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:16.230467   40049 scope.go:117] "RemoveContainer" containerID="ca797a7433e09b256591c0abd395d30383489ab3e33095f655f88ed7ba38bed7"
	Sep 16 10:49:17 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:17.836079   40049 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 16 10:49:17 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:17.838190   40049 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 16 10:49:29 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:29.473542   40049 scope.go:117] "RemoveContainer" containerID="0d522fc642e51982c70238dfb6f58169923c1becb405bcb2e6462dabf54cf54d"
	Sep 16 10:49:29 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:29.473730   40049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dfe4a726-3764-4daf-a322-8f33ae3528f7)\"" pod="kube-system/storage-provisioner" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7"
	
	
	==> storage-provisioner [0d522fc642e5] <==
	I0916 10:49:15.582187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:49:15.584859       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (466.109µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubeContext (1.17s)

TestFunctional/serial/KubectlGetPods (1.16s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context minikube get po -A: fork/exec /usr/local/bin/kubectl: exec format error (361.429µs)
functional_test.go:698: failed to get kubectl pods: args "kubectl --context minikube get po -A" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context minikube get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:23 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:38 UTC |
	|         | helm-tiller --alsologtostderr        |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | enable headlamp -p minikube          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | headlamp --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | -p minikube                          |          |         |         |                     |                     |
	| addons  | minikube addons disable yakd         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| stop    | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | disable gvisor -p minikube           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| start   | -p minikube --memory=2048            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|         | --cert-expiration=3m                 |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| start   | -p minikube --memory=2048            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	|         | --cert-expiration=8760h              |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start   | -p minikube --memory=4000            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|         | --apiserver-port=8441                |          |         |         |                     |                     |
	|         | --wait=all --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | -v=8                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:49:01
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:49:01.151961   40910 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:49:01.152095   40910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:01.152107   40910 out.go:358] Setting ErrFile to fd 2...
	I0916 10:49:01.152112   40910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:01.152289   40910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:49:01.152830   40910 out.go:352] Setting JSON to false
	I0916 10:49:01.154034   40910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1892,"bootTime":1726481849,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:49:01.154131   40910 start.go:139] virtualization: kvm guest
	I0916 10:49:01.156584   40910 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:49:01.158407   40910 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:49:01.158430   40910 notify.go:220] Checking for updates...
	W0916 10:49:01.158432   40910 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:49:01.160643   40910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:49:01.161920   40910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:01.163203   40910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:49:01.164512   40910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:49:01.165743   40910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:49:01.167548   40910 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:49:01.167660   40910 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:49:01.168150   40910 exec_runner.go:51] Run: systemctl --version
	I0916 10:49:01.181781   40910 out.go:177] * Using the none driver based on existing profile
	I0916 10:49:01.183300   40910 start.go:297] selected driver: none
	I0916 10:49:01.183319   40910 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:01.183453   40910 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:49:01.183502   40910 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 10:49:01.185287   40910 cni.go:84] Creating CNI manager for ""
	I0916 10:49:01.185375   40910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:01.185448   40910 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:01.187093   40910 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:49:01.188500   40910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:49:01.188773   40910 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:49:01.188890   40910 start.go:364] duration metric: took 76.273µs to acquireMachinesLock for "minikube"
	I0916 10:49:01.188913   40910 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:49:01.188925   40910 fix.go:54] fixHost starting: 
	I0916 10:49:01.189892   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:01.189915   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:01.189961   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:01.209135   40910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/39915/cgroup
	I0916 10:49:01.220119   40910 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/a84496f2946e5428a577f4d4bdcfe2c49204cca7acad6168eb47dea051942fe4"
	I0916 10:49:01.220183   40910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/a84496f2946e5428a577f4d4bdcfe2c49204cca7acad6168eb47dea051942fe4/freezer.state
	I0916 10:49:01.228949   40910 api_server.go:204] freezer state: "THAWED"
	I0916 10:49:01.228996   40910 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:01.232514   40910 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
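
	The healthz probe logged at api_server.go:253/279 above is essentially an HTTPS GET that treats a 200 response with body "ok" as healthy. A rough sketch of such a probe follows; TLS verification is skipped here only for brevity, whereas minikube's real check trusts the cluster certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is signed by minikubeCA, not a public CA,
			// so this sketch skips verification instead of loading the CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.138.0.48:8441/healthz")
		if err != nil {
			log.Fatalf("healthz request failed: %v", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}
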
	I0916 10:49:01.232545   40910 fix.go:112] recreateIfNeeded on minikube: state=Running err=<nil>
	W0916 10:49:01.232554   40910 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:49:01.234457   40910 out.go:177] * Updating the running none "minikube" bare metal machine ...
	I0916 10:49:01.235710   40910 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:49:01.235759   40910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:01.235801   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:01.248549   40910 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:49:01.248572   40910 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:49:01.248580   40910 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:49:01.250269   40910 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:49:01.251512   40910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:49:01.251582   40910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:49:01.251665   40910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> 110572.pem in /etc/ssl/certs
	I0916 10:49:01.251676   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> /etc/ssl/certs/110572.pem
	I0916 10:49:01.251744   40910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts -> hosts in /etc/test/nested/copy/11057
	I0916 10:49:01.251751   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts -> /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.251794   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11057
	I0916 10:49:01.259798   40910 exec_runner.go:144] found /etc/ssl/certs/110572.pem, removing ...
	I0916 10:49:01.259817   40910 exec_runner.go:203] rm: /etc/ssl/certs/110572.pem
	I0916 10:49:01.259849   40910 exec_runner.go:51] Run: sudo rm -f /etc/ssl/certs/110572.pem
	I0916 10:49:01.269468   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:01.269641   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube732688499 /etc/ssl/certs/110572.pem
	I0916 10:49:01.277772   40910 exec_runner.go:144] found /etc/test/nested/copy/11057/hosts, removing ...
	I0916 10:49:01.277791   40910 exec_runner.go:203] rm: /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.277819   40910 exec_runner.go:51] Run: sudo rm -f /etc/test/nested/copy/11057/hosts
	I0916 10:49:01.285197   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts --> /etc/test/nested/copy/11057/hosts (40 bytes)
	I0916 10:49:01.285316   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube465686449 /etc/test/nested/copy/11057/hosts
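
	The paired "cp: <src> --> <dst> (N bytes)" and "Run: sudo cp -a /tmp/minikubeNNN <dst>" lines here and throughout the log reflect one exec-runner pattern: stage the asset in an unprivileged temp file, then promote it into place with sudo. A minimal sketch of that pattern (hypothetical helper name, not minikube's actual code):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	// installFile stages data in a temp file and promotes it with sudo cp -a,
	// mirroring the cp/sudo-cp pairs in the log above.
	func installFile(data []byte, dst string) error {
		tmp, err := os.CreateTemp("", "minikube")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.Write(data); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		return exec.Command("sudo", "cp", "-a", tmp.Name(), dst).Run()
	}

	func main() {
		if err := installFile([]byte("127.0.0.1 example\n"), "/tmp/demo-hosts"); err != nil {
			log.Fatal(err)
		}
	}
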
	I0916 10:49:01.293938   40910 start.go:296] duration metric: took 58.209304ms for postStartSetup
	I0916 10:49:01.293964   40910 fix.go:56] duration metric: took 105.040267ms for fixHost
	I0916 10:49:01.293973   40910 start.go:83] releasing machines lock for "minikube", held for 105.068271ms
	I0916 10:49:01.294137   40910 interface.go:432] Looking for default routes with IPv4 addresses
	I0916 10:49:01.294148   40910 interface.go:437] Default route transits interface "ens4"
	I0916 10:49:01.294329   40910 interface.go:209] Interface ens4 is up
	I0916 10:49:01.294389   40910 interface.go:257] Interface "ens4" has 2 addresses :[10.138.0.48/32 fe80::4001:aff:fe8a:30/64].
	I0916 10:49:01.294426   40910 interface.go:224] Checking addr  10.138.0.48/32.
	I0916 10:49:01.294439   40910 interface.go:231] IP found 10.138.0.48
	I0916 10:49:01.294453   40910 interface.go:263] Found valid IPv4 address 10.138.0.48 for interface "ens4".
	I0916 10:49:01.294464   40910 interface.go:443] Found active IP 10.138.0.48 
	I0916 10:49:01.294551   40910 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:49:01.294609   40910 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:49:01.296373   40910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:01.296419   40910 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:01.304778   40910 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:01.304804   40910 start.go:495] detecting cgroup driver to use...
	I0916 10:49:01.304834   40910 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:01.304933   40910 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:01.321651   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:49:01.330364   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:49:01.338939   40910 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:49:01.339015   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:49:01.347758   40910 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:01.356238   40910 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:49:01.365789   40910 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:01.375456   40910 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:01.383147   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:49:01.392828   40910 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:49:01.401464   40910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:49:01.409759   40910 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:01.416630   40910 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:01.423420   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:01.671116   40910 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:49:01.835580   40910 start.go:495] detecting cgroup driver to use...
	I0916 10:49:01.835628   40910 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:01.835789   40910 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:01.855770   40910 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:49:01.856677   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:49:01.865411   40910 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:49:01.865433   40910 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:01.865469   40910 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:01.873087   40910 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:49:01.873214   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2891534564 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:01.880726   40910 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:49:02.118649   40910 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:49:02.355022   40910 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:49:02.355171   40910 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:49:02.355186   40910 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:49:02.355227   40910 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:49:02.364314   40910 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:49:02.364450   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2214450792 /etc/docker/daemon.json
	I0916 10:49:02.372419   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:02.612083   40910 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:49:13.102098   40910 exec_runner.go:84] Completed: sudo systemctl restart docker: (10.489964604s)
	I0916 10:49:13.102166   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:49:13.117386   40910 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:49:13.150852   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:13.163093   40910 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:49:13.380641   40910 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:49:13.597912   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:13.823840   40910 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:49:13.841381   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:13.854143   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:14.070802   40910 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:49:14.139874   40910 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:49:14.139951   40910 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:49:14.141296   40910 start.go:563] Will wait 60s for crictl version
	I0916 10:49:14.141344   40910 exec_runner.go:51] Run: which crictl
	I0916 10:49:14.142223   40910 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:49:14.171538   40910 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:49:14.171592   40910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:14.194015   40910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:14.216117   40910 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:49:14.216210   40910 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:14.218934   40910 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:49:14.220124   40910 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:14.220241   40910 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:49:14.220272   40910 kubeadm.go:934] updating node { 10.138.0.48 8441 v1.31.1 docker true true} ...
	I0916 10:49:14.220365   40910 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:49:14.220420   40910 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:49:14.268128   40910 cni.go:84] Creating CNI manager for ""
	I0916 10:49:14.268156   40910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:14.268166   40910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:14.268187   40910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:14.268353   40910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:49:14.268417   40910 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:14.277293   40910 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:14.277344   40910 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:49:14.285281   40910 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:49:14.285302   40910 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:14.285345   40910 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:14.292474   40910 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:49:14.292596   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3131872588 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:14.299866   40910 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:49:14.299897   40910 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:49:14.299936   40910 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:49:14.307476   40910 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:14.307590   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3779848822 /lib/systemd/system/kubelet.service
	I0916 10:49:14.315642   40910 exec_runner.go:144] found /var/tmp/minikube/kubeadm.yaml.new, removing ...
	I0916 10:49:14.315659   40910 exec_runner.go:203] rm: /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.315686   40910 exec_runner.go:51] Run: sudo rm -f /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.322574   40910 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:49:14.322721   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1661355805 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.331244   40910 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:14.332560   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:14.544862   40910 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:49:14.556727   40910 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:49:14.556749   40910 certs.go:194] generating shared ca certs ...
	I0916 10:49:14.556768   40910 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:14.556918   40910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:49:14.556972   40910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:49:14.556986   40910 certs.go:256] generating profile certs ...
	I0916 10:49:14.557130   40910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:49:14.557208   40910 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:49:14.557258   40910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:49:14.557271   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.557288   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:49:14.557305   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.557325   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.557341   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.557361   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.557378   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.557396   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.557464   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem (1338 bytes)
	W0916 10:49:14.557505   40910 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:14.557518   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:49:14.557553   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:49:14.557586   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:14.557620   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:14.557675   40910 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:14.557723   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.557744   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.557762   40910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem -> /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.558297   40910 exec_runner.go:144] found /var/lib/minikube/certs/ca.crt, removing ...
	I0916 10:49:14.558311   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.558352   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.566572   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:14.566718   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1119541196 /var/lib/minikube/certs/ca.crt
	I0916 10:49:14.575377   40910 exec_runner.go:144] found /var/lib/minikube/certs/ca.key, removing ...
	I0916 10:49:14.575398   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.key
	I0916 10:49:14.575438   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.key
	I0916 10:49:14.582900   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:49:14.583058   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3054346620 /var/lib/minikube/certs/ca.key
	I0916 10:49:14.591208   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.crt, removing ...
	I0916 10:49:14.591229   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.591261   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.598888   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:14.599014   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube717466652 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:14.607239   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.key, removing ...
	I0916 10:49:14.607262   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.607305   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.614584   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:14.614726   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2863523917 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:14.622448   40910 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.crt, removing ...
	I0916 10:49:14.622465   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.622499   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.629432   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:49:14.629559   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2028355538 /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:14.637572   40910 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.key, removing ...
	I0916 10:49:14.637591   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.637619   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.644355   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:14.644484   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2352688056 /var/lib/minikube/certs/apiserver.key
	I0916 10:49:14.652620   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.crt, removing ...
	I0916 10:49:14.652636   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.652676   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.659675   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:14.659789   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3054953620 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:14.667727   40910 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.key, removing ...
	I0916 10:49:14.667743   40910 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.667769   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.675532   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:49:14.675648   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube447743794 /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:14.683024   40910 exec_runner.go:144] found /usr/share/ca-certificates/110572.pem, removing ...
	I0916 10:49:14.683043   40910 exec_runner.go:203] rm: /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.683069   40910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.690871   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /usr/share/ca-certificates/110572.pem (1708 bytes)
	I0916 10:49:14.691061   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube963407501 /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.698372   40910 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:49:14.698390   40910 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.698421   40910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.705324   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:14.705446   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1523262685 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.712576   40910 exec_runner.go:144] found /usr/share/ca-certificates/11057.pem, removing ...
	I0916 10:49:14.712591   40910 exec_runner.go:203] rm: /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.712619   40910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.720442   40910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem --> /usr/share/ca-certificates/11057.pem (1338 bytes)
	I0916 10:49:14.720558   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube233605773 /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.727822   40910 exec_runner.go:144] found /var/lib/minikube/kubeconfig, removing ...
	I0916 10:49:14.727837   40910 exec_runner.go:203] rm: /var/lib/minikube/kubeconfig
	I0916 10:49:14.727863   40910 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/kubeconfig
	I0916 10:49:14.735069   40910 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:14.735193   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3783177391 /var/lib/minikube/kubeconfig
	I0916 10:49:14.742149   40910 exec_runner.go:51] Run: openssl version
	I0916 10:49:14.744789   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:14.753163   40910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.754466   40910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.754501   40910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:14.757166   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:14.765673   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11057.pem && ln -fs /usr/share/ca-certificates/11057.pem /etc/ssl/certs/11057.pem"
	I0916 10:49:14.783913   40910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.785237   40910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1338 Sep 16 10:49 /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.785283   40910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11057.pem
	I0916 10:49:14.788160   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11057.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:14.796603   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110572.pem && ln -fs /usr/share/ca-certificates/110572.pem /etc/ssl/certs/110572.pem"
	I0916 10:49:14.804481   40910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.805685   40910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1708 Sep 16 10:49 /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.805770   40910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110572.pem
	I0916 10:49:14.808472   40910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110572.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:49:14.815668   40910 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:14.816908   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:14.819662   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:14.822313   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:14.824912   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:14.827464   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:14.830057   40910 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
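
	Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate will still be valid 24 hours (86400 seconds) from now; a failing check is what triggers cert regeneration. An equivalent check in Go (a sketch; the path is one of the files tested above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM data found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of -checkend 86400: does the cert outlive the next 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400 seconds")
		} else {
			fmt.Println("certificate will not expire within 86400 seconds")
		}
	}
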
	I0916 10:49:14.832590   40910 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:14.832711   40910 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:49:14.848598   40910 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:49:14.856680   40910 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:49:14.856696   40910 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:49:14.856734   40910 exec_runner.go:51] Run: sudo test -d /data/minikube
	I0916 10:49:14.863756   40910 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:14.864097   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:14.864491   40910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:14.864741   40910 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:49:14.865225   40910 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:49:14.865426   40910 exec_runner.go:51] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:14.872764   40910 kubeadm.go:630] The running cluster does not require reconfiguration: 10.138.0.48
	I0916 10:49:14.872792   40910 kubeadm.go:597] duration metric: took 16.091162ms to restartPrimaryControlPlane
	I0916 10:49:14.872800   40910 kubeadm.go:394] duration metric: took 40.215274ms to StartCluster
	I0916 10:49:14.872816   40910 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:14.872873   40910 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:14.873412   40910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:14.873745   40910 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:49:14.873830   40910 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:49:14.873846   40910 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:49:14.873849   40910 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:49:14.873876   40910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:49:14.873913   40910 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 10:49:14.873854   40910 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:49:14.873991   40910 host.go:66] Checking if "minikube" exists ...
	I0916 10:49:14.874348   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:14.874364   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:14.874392   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:14.874445   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:14.874458   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:14.874478   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:14.876658   40910 out.go:177] * Configuring local host environment ...
	W0916 10:49:14.878282   40910 out.go:270] * 
	W0916 10:49:14.878299   40910 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:49:14.878305   40910 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:49:14.878310   40910 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:49:14.878319   40910 out.go:270] * 
	W0916 10:49:14.878357   40910 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:49:14.878367   40910 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:49:14.878373   40910 out.go:270] * 
	W0916 10:49:14.878400   40910 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:49:14.878413   40910 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:49:14.878418   40910 out.go:270] * 
	W0916 10:49:14.878422   40910 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:49:14.878447   40910 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:49:14.879746   40910 out.go:177] * Verifying Kubernetes components...
	I0916 10:49:14.881383   40910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	W0916 10:49:14.891622   40910 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:14.891682   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	W0916 10:49:14.892731   40910 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:14.892785   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:49:15.116602   40910 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:49:15.122178   40910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:15.122498   40910 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:49:15.122771   40910 addons.go:234] Setting addon default-storageclass=true in "minikube"
	W0916 10:49:15.122789   40910 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:49:15.122816   40910 host.go:66] Checking if "minikube" exists ...
	I0916 10:49:15.123334   40910 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:15.123351   40910 api_server.go:166] Checking apiserver status ...
	I0916 10:49:15.123382   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:15.124266   40910 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:49:15.125971   40910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.125996   40910 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:49:15.126002   40910 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.126029   40910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.128867   40910 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:49:15.128987   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:15.128997   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:15.129009   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:15.129015   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:15.129222   40910 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:49:15.129236   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:15.133877   40910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:49:15.134031   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2331610739 /etc/kubernetes/addons/storage-provisioner.yaml
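
The `cp: memory --> ...` line reflects the none driver's pattern for installing root-owned files: the addon manifest is held as bytes in the minikube process, written to a throwaway file under /tmp, then copied into place with `sudo cp -a`. A rough sketch of that shape under those assumptions (writeRootFile is a hypothetical helper, not minikube's API):

    package main

    import (
        "os"
        "os/exec"
    )

    // writeRootFile stages data in a temp file, then copies it into a root-owned
    // destination, like the `sudo cp -a /tmp/minikube... ` log line above.
    func writeRootFile(data []byte, dst string) error {
        tmp, err := os.CreateTemp("", "minikube")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name())
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return exec.Command("sudo", "cp", "-a", tmp.Name(), dst).Run()
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: ServiceAccount\n") // stand-in content
        if err := writeRootFile(manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            panic(err)
        }
    }
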
	W0916 10:49:15.139092   40910 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:15.139135   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:49:15.142164   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.148974   40910 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.148997   40910 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:49:15.149003   40910 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.149044   40910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.156633   40910 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:49:15.156903   40910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4183224578 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.167362   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0916 10:49:15.224893   40910 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:49:15.224933   40910 retry.go:31] will retry after 338.203366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 10:49:15.257912   40910 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 10:49:15.257952   40910 retry.go:31] will retry after 323.835935ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
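
Both applies fail at this point only because the freshly restarted apiserver is not yet answering on port 8441, so retry.go schedules another attempt after a short randomized delay rather than aborting. A minimal sketch of that retry-with-delay shape (the attempt count and jitter range here are illustrative, not minikube's values):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds or attempts run out,
    // sleeping a jittered delay between tries, as the retry.go lines above show.
    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
            fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            panic(err)
        }
    }
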
	I0916 10:49:15.563337   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:49:15.585866   40910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:49:15.631299   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:15.631323   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:15.631331   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:15.631335   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:15.631599   40910 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:49:15.631623   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:16.129460   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:16.129488   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:16.129500   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:16.129505   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.840845   40910 round_trippers.go:574] Response Status: 200 OK in 1711 milliseconds
	I0916 10:49:17.840871   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.840882   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.840887   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.840894   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.840898   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.840902   40910 round_trippers.go:580]     Audit-Id: 0c979e5c-932a-459f-ab9c-9cd0ae9b5400
	I0916 10:49:17.840906   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.841053   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:17.842042   40910 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:49:17.842064   40910 node_ready.go:38] duration metric: took 2.713160156s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:49:17.842077   40910 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:49:17.842153   40910 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:49:17.842166   40910 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:49:17.842232   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:17.842239   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.842249   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.842255   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.849622   40910 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:49:17.849648   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.849658   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.849665   40910 round_trippers.go:580]     Audit-Id: 18da7c90-93f3-4739-be80-a1dbd645cd92
	I0916 10:49:17.849669   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.849672   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.849677   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.849680   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.850491   40910 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"393"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"365","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51819 chars]
	I0916 10:49:17.854842   40910 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:17.854923   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:17.854935   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.854945   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.854950   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.856679   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:17.856694   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.856701   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.856704   40910 round_trippers.go:580]     Audit-Id: aedb8f86-8d36-4b53-9f18-beaaa7217748
	I0916 10:49:17.856709   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.856713   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.856717   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.856721   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.856848   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"365","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6725 chars]
	I0916 10:49:17.857363   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:17.857379   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.857387   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.857390   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.862681   40910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:49:17.862697   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.862706   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.862714   40910 round_trippers.go:580]     Audit-Id: 5693cdba-39e6-4bc8-adc4-8bf7c8200ae9
	I0916 10:49:17.862719   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.862723   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.862727   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:49:17.862732   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:49:17.863186   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:17.922804   40910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.336891591s)
	I0916 10:49:17.922941   40910 round_trippers.go:463] GET https://10.138.0.48:8441/apis/storage.k8s.io/v1/storageclasses
	I0916 10:49:17.922953   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.922965   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:17.922977   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.930707   40910 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:49:17.930728   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:17.930737   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:17.930743   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:17.930748   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:17.930758   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:17.930763   40910 round_trippers.go:580]     Content-Length: 1273
	I0916 10:49:17.930770   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:17 GMT
	I0916 10:49:17.930774   40910 round_trippers.go:580]     Audit-Id: 9e077ef1-e7db-4bed-bcd1-b27a8d302926
	I0916 10:49:17.930837   40910 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"standard","uid":"d6453ef1-d9d2-49dc-afbd-f07eda085888","resourceVersion":"311","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:49:17.931396   40910 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6453ef1-d9d2-49dc-afbd-f07eda085888","resourceVersion":"311","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:49:17.931460   40910 round_trippers.go:463] PUT https://10.138.0.48:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:49:17.931468   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:17.931478   40910 round_trippers.go:473]     Content-Type: application/json
	I0916 10:49:17.931483   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:17.931487   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.035155   40910 round_trippers.go:574] Response Status: 200 OK in 103 milliseconds
	I0916 10:49:18.035193   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.035203   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.035208   40910 round_trippers.go:580]     Audit-Id: 6faa9ba8-e9c3-4c46-82a8-79a43344462f
	I0916 10:49:18.035212   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.035217   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.035220   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.035226   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.035229   40910 round_trippers.go:580]     Content-Length: 1220
	I0916 10:49:18.035442   40910 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6453ef1-d9d2-49dc-afbd-f07eda085888","resourceVersion":"311","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:49:18.343579   40910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.78019227s)
	I0916 10:49:18.345676   40910 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:49:18.347010   40910 addons.go:510] duration metric: took 3.473261973s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 10:49:18.355794   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:18.355811   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.355820   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.355824   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.357814   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.357833   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.357843   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.357848   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.357853   40910 round_trippers.go:580]     Audit-Id: cb0a3e06-0913-45d3-8d44-f2a4fcf53152
	I0916 10:49:18.357857   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.357862   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.357867   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.358031   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:18.358534   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:18.358551   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.358559   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.358563   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.360229   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.360334   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.360346   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.360352   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.360355   40910 round_trippers.go:580]     Audit-Id: d9d563f3-9212-4e1f-8158-739186734848
	I0916 10:49:18.360358   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.360362   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.360366   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.360461   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:18.855630   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:18.855657   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.855668   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.855673   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.857346   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.857375   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.857383   40910 round_trippers.go:580]     Audit-Id: 61e10664-2cee-44a3-a164-49906cc3d58a
	I0916 10:49:18.857388   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.857392   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.857396   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.857400   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.857404   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.857495   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:18.858229   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:18.858249   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:18.858258   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:18.858269   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:18.859861   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:18.859881   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:18.859891   40910 round_trippers.go:580]     Audit-Id: 3985e165-2fe7-4da9-86fc-86dd41595480
	I0916 10:49:18.859896   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:18.859899   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:18.859904   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:18.859909   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:18.859914   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:18 GMT
	I0916 10:49:18.860081   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:19.355699   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:19.355722   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.355730   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.355734   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.357922   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:19.357944   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.357953   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.357958   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.357963   40910 round_trippers.go:580]     Audit-Id: cec66111-ab6c-4555-9a39-692cca3a9573
	I0916 10:49:19.357968   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.357973   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.357976   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.358146   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:19.358622   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:19.358634   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.358641   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.358646   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.360335   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:19.360350   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.360357   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.360360   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.360363   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.360366   40910 round_trippers.go:580]     Audit-Id: 38f5fa24-9fca-439f-bb7d-6079d2867123
	I0916 10:49:19.360368   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.360371   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.360535   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:19.855936   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:19.855963   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.855973   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.855981   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.857909   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:19.857936   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.857947   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.857955   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.857961   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.857965   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.857969   40910 round_trippers.go:580]     Audit-Id: 7a07f273-e266-4b67-a185-21bc296d6b62
	I0916 10:49:19.857973   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.858117   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"401","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6890 chars]
	I0916 10:49:19.858719   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:19.858740   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:19.858748   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:19.858751   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:19.860339   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:19.860369   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:19.860379   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:19.860384   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:19.860390   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:19 GMT
	I0916 10:49:19.860394   40910 round_trippers.go:580]     Audit-Id: 090fe24b-bf36-4539-887d-23dafb158106
	I0916 10:49:19.860400   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:19.860406   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:19.860548   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:19.860972   40910 pod_ready.go:103] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"False"
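
The coredns pod reports "Ready":"False" here and "True" one poll later, because pod_ready simply re-queries on a fixed cadence until the condition flips or the 6m budget expires. The shape of that loop, sketched without the client plumbing (waitPodReady and isPodReady are hypothetical stand-ins; isPodReady would be the PodReady-condition check sketched earlier):

    package main

    import (
        "errors"
        "time"
    )

    // waitPodReady polls condition every interval until it returns true or
    // timeout elapses, mirroring the repeated GETs in the log above.
    func waitPodReady(condition func() bool, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if condition() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for pod to be Ready")
    }

    func main() {
        isPodReady := func() bool { return true } // hypothetical check; see the client-go sketch above
        if err := waitPodReady(isPodReady, 500*time.Millisecond, 6*time.Minute); err != nil {
            panic(err)
        }
    }
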
	I0916 10:49:20.355067   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq
	I0916 10:49:20.355104   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.355113   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.355118   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.356778   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.356799   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.356809   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.356814   40910 round_trippers.go:580]     Audit-Id: ad3c2817-d8ee-4118-85dd-8a2dae9f77c7
	I0916 10:49:20.356818   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.356826   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.356830   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.356837   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.356925   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"471","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I0916 10:49:20.357379   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:20.357392   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.357401   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.357405   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.358988   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.359006   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.359015   40910 round_trippers.go:580]     Audit-Id: 8c8d0dee-bcd9-4703-aae0-edd1d76ed8c5
	I0916 10:49:20.359020   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.359029   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.359034   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.359042   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.359050   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.359185   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:20.359542   40910 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:20.359558   40910 pod_ready.go:82] duration metric: took 2.504692215s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:20.359568   40910 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:20.359635   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:20.359652   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.359662   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.359673   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.361112   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.361130   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.361140   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.361146   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.361151   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.361156   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.361160   40910 round_trippers.go:580]     Audit-Id: 1919f5e2-5c80-4724-b16b-5d74564e1102
	I0916 10:49:20.361168   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.361311   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:20.361652   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:20.361664   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.361671   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.361676   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.362943   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.362962   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.362972   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.362978   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.362985   40910 round_trippers.go:580]     Audit-Id: 221062e0-12be-4e20-b2bd-9efd140cdd83
	I0916 10:49:20.362993   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.363001   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.363005   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.363119   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:20.859791   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:20.859816   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.859825   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.859829   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.861460   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.861477   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.861484   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.861490   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.861496   40910 round_trippers.go:580]     Audit-Id: 420aad55-3bd6-4e8d-acbf-7c2f06dc4d09
	I0916 10:49:20.861500   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.861504   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.861508   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.861606   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:20.862039   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:20.862058   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:20.862070   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:20.862078   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:20.863699   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:20.863712   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:20.863719   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:20.863725   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:20.863730   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:20.863736   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:20.863740   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:20 GMT
	I0916 10:49:20.863744   40910 round_trippers.go:580]     Audit-Id: dec8e4d0-fae1-4712-9bb6-32a7a9d67964
	I0916 10:49:20.863844   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:21.359869   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:21.359889   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.359896   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.359901   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.361802   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:21.361825   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.361835   40910 round_trippers.go:580]     Audit-Id: 433cf211-2bf2-42df-9c16-45c32524e267
	I0916 10:49:21.361841   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.361846   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.361850   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.361853   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.361857   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.361957   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:21.362473   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:21.362490   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.362500   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.362507   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.363914   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:21.363934   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.363942   40910 round_trippers.go:580]     Audit-Id: 2c4bd7fa-4b72-49b4-8b4e-e55b95e99270
	I0916 10:49:21.363946   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.363950   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.363954   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.363956   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.363960   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.364096   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:21.859752   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:21.859781   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.859790   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.859796   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.861861   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:21.861878   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.861884   40910 round_trippers.go:580]     Audit-Id: ac2efe12-3aa5-4cc7-8fd9-7cd02097a34b
	I0916 10:49:21.861888   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.861892   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.861896   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.861899   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.861902   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.861992   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:21.862371   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:21.862383   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:21.862389   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:21.862393   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:21.864309   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:21.864330   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:21.864339   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:21 GMT
	I0916 10:49:21.864345   40910 round_trippers.go:580]     Audit-Id: ab304f5a-44a3-4516-8132-bb28d212a0a6
	I0916 10:49:21.864350   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:21.864353   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:21.864358   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:21.864361   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:21.864452   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:22.359806   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:22.359826   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.359832   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.359836   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.361937   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:22.361957   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.361966   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.361971   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.361974   40910 round_trippers.go:580]     Audit-Id: f74d0130-daa6-443f-af26-3b3946bf48d8
	I0916 10:49:22.361976   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.361978   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.361981   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.362111   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:22.362621   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:22.362637   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.362645   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.362651   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.364360   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:22.364375   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.364381   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.364385   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.364388   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.364392   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.364395   40910 round_trippers.go:580]     Audit-Id: 50dd0252-9c6a-4eea-ab5f-8b9b9ecd38e5
	I0916 10:49:22.364398   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.364565   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:22.364938   40910 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:22.860158   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:22.860178   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.860186   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.860191   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.863070   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:22.863093   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.863102   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.863108   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.863114   40910 round_trippers.go:580]     Audit-Id: 83fd933e-08cf-4ab2-a969-79221445ce39
	I0916 10:49:22.863119   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.863122   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.863125   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.863284   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:22.863711   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:22.863726   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:22.863735   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:22.863741   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:22.865687   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:22.865726   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:22.865736   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:22.865742   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:22.865747   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:22.865751   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:22.865755   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:22 GMT
	I0916 10:49:22.865762   40910 round_trippers.go:580]     Audit-Id: cf9c05b9-a6fd-41e2-9e13-57c810547f6d
	I0916 10:49:22.865896   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:23.360518   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:23.360541   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.360550   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.360554   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.362886   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:23.362903   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.362910   40910 round_trippers.go:580]     Audit-Id: 07a4e463-176f-4454-97ec-0b78b4c7ca05
	I0916 10:49:23.362914   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.362917   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.362920   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.362923   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.362925   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.363051   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:23.363505   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:23.363516   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.363522   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.363525   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.365087   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:23.365100   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.365106   40910 round_trippers.go:580]     Audit-Id: 482db780-b1d8-47f3-97f8-d288082cfe7a
	I0916 10:49:23.365111   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.365116   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.365120   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.365124   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.365128   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.365316   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:23.859916   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:23.859942   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.859950   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.859955   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.862144   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:23.862163   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.862175   40910 round_trippers.go:580]     Audit-Id: ff969368-b1ed-4db1-a30f-c3d13e0f8ef6
	I0916 10:49:23.862183   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.862187   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.862192   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.862196   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.862200   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.862332   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:23.862764   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:23.862777   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:23.862785   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:23.862794   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:23.864625   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:23.864642   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:23.864650   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:23.864654   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:23.864660   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:23.864665   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:23.864670   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:23 GMT
	I0916 10:49:23.864674   40910 round_trippers.go:580]     Audit-Id: 4ffd8ec2-484c-4cfd-bae6-50df2ace71fe
	I0916 10:49:23.864778   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:24.360444   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:24.360466   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.360474   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.360479   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.362638   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:24.362660   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.362667   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.362673   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.362679   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.362686   40910 round_trippers.go:580]     Audit-Id: a98b4072-039d-4500-8e4d-1a25241af7a0
	I0916 10:49:24.362691   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.362694   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.362853   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:24.363275   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:24.363289   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.363295   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.363299   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.365187   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:24.365202   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.365209   40910 round_trippers.go:580]     Audit-Id: c2f1e152-ce36-4a0b-ba2f-935d53a3eac4
	I0916 10:49:24.365214   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.365220   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.365226   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.365230   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.365235   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.365353   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:24.365725   40910 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:24.859970   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:24.860005   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.860013   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.860018   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.862184   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:24.862206   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.862222   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.862229   40910 round_trippers.go:580]     Audit-Id: 45b404e7-5972-43e6-866a-d34f739c24da
	I0916 10:49:24.862233   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.862238   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.862242   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.862247   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.862381   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:24.862887   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:24.862903   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:24.862913   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:24.862921   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:24.864699   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:24.864716   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:24.864723   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:24.864728   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:24.864731   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:24 GMT
	I0916 10:49:24.864737   40910 round_trippers.go:580]     Audit-Id: abda13ac-f0a9-46e8-9d2c-80b04566dfeb
	I0916 10:49:24.864741   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:24.864743   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:24.864882   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:25.360639   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:25.360661   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.360670   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.360674   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.362905   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:25.362929   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.362939   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.362945   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.362950   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.362955   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.362959   40910 round_trippers.go:580]     Audit-Id: e5c458b6-cb9c-44cb-bb96-41ccf94f251b
	I0916 10:49:25.362965   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.363105   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:25.363507   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:25.363520   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.363527   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.363531   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.365376   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:25.365392   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.365398   40910 round_trippers.go:580]     Audit-Id: 2b2f7729-14f1-46f6-ba9b-6e26d7245db4
	I0916 10:49:25.365403   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.365408   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.365414   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.365420   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.365423   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.365510   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:25.860100   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:25.860123   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.860131   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.860135   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.862284   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:25.862304   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.862312   40910 round_trippers.go:580]     Audit-Id: 9f536b90-6dea-429a-aca7-8533db52a1e1
	I0916 10:49:25.862319   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.862326   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.862331   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.862336   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.862341   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.862429   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:25.862830   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:25.862844   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:25.862853   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:25.862857   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:25.864414   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:25.864428   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:25.864434   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:25.864437   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:25.864450   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:25.864453   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:25.864456   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:25 GMT
	I0916 10:49:25.864458   40910 round_trippers.go:580]     Audit-Id: 02b76fec-5096-40b5-9b07-824da3de5d1e
	I0916 10:49:25.864580   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:26.360467   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:26.360488   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.360496   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.360500   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.362759   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:26.362781   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.362790   40910 round_trippers.go:580]     Audit-Id: d564d6a2-7d5a-42d1-b9fa-b76a94d78dfe
	I0916 10:49:26.362795   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.362800   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.362803   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.362807   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.362810   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.362974   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:26.363411   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:26.363427   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.363433   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.363438   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.365243   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:26.365257   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.365266   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.365272   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.365277   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.365280   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.365285   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.365289   40910 round_trippers.go:580]     Audit-Id: ac9c40fe-654d-43f4-9075-5a38c991f929
	I0916 10:49:26.365436   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:26.365842   40910 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:26.860024   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:26.860047   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.860055   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.860057   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.862151   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:26.862174   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.862185   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.862192   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.862198   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.862203   40910 round_trippers.go:580]     Audit-Id: d54ed749-7239-493d-90c4-b5b6f768e14a
	I0916 10:49:26.862207   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.862212   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.862335   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:26.862763   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:26.862776   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:26.862782   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:26.862786   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:26.864518   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:26.864532   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:26.864539   40910 round_trippers.go:580]     Audit-Id: 3aa86210-1512-451b-af62-bb440f2a5e34
	I0916 10:49:26.864542   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:26.864546   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:26.864548   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:26.864551   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:26.864554   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:26 GMT
	I0916 10:49:26.864676   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:27.360306   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:27.360326   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.360334   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.360338   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.362264   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.362295   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.362305   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.362310   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.362314   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.362318   40910 round_trippers.go:580]     Audit-Id: 6b4d4fe2-ff6b-4ea7-86e0-122bca1d12e8
	I0916 10:49:27.362324   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.362328   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.362426   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"411","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6881 chars]
	I0916 10:49:27.362914   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:27.362933   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.362943   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.362953   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.364573   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.364595   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.364604   40910 round_trippers.go:580]     Audit-Id: 258778c2-bfc9-4fdc-8444-b6649e01e846
	I0916 10:49:27.364609   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.364614   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.364621   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.364624   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.364629   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.364800   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:27.860464   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2
	I0916 10:49:27.860487   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.860495   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.860499   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.862476   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.862495   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.862504   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.862509   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.862513   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.862516   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.862521   40910 round_trippers.go:580]     Audit-Id: f46952ad-37b9-450b-8019-dd8789a2be40
	I0916 10:49:27.862527   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.862644   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-ubuntu-20-agent-2","namespace":"kube-system","uid":"3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb","resourceVersion":"482","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://10.138.0.48:2379","kubernetes.io/config.hash":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.mirror":"5b137b06bdfaed6743b655439322dfe0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043150835Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6657 chars]
	I0916 10:49:27.863068   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:27.863081   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.863087   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.863091   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.864607   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.864625   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.864633   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.864638   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.864642   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.864646   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.864650   40910 round_trippers.go:580]     Audit-Id: f5c0fbe7-4017-48fd-a03b-a1c27aeade11
	I0916 10:49:27.864655   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.864786   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:27.865215   40910 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:27.865233   40910 pod_ready.go:82] duration metric: took 7.505656859s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
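
The trace above is minikube's pod_ready wait loop: roughly every 500ms it issues one GET for the target pod (to read its Ready condition) and one GET for the node, and stops once the condition reports "True" or the 6m0s per-pod budget expires. A minimal sketch of that readiness poll against client-go follows; the package and function names are illustrative, not minikube's actual pod_ready.go:

// Sketch (assumed names, not minikube's real helper): poll a pod's Ready
// condition with client-go until it is True or the timeout expires.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady blocks until the named pod reports Ready or timeout elapses.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat GET errors as transient; keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Returning (false, nil) on a failed GET keeps the poll alive, which matches the behavior visible in the log: a not-yet-Ready response is simply followed by the next request on the following tick.
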
	I0916 10:49:27.865244   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:27.865318   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:27.865328   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.865337   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.865346   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.866792   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.866808   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.866817   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.866822   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.866827   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.866832   40910 round_trippers.go:580]     Audit-Id: 6a226f55-f577-4b15-a525-fee48a3732ca
	I0916 10:49:27.866835   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.866842   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.866962   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:27.867388   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:27.867401   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:27.867407   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:27.867411   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:27.868713   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:27.868730   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:27.868739   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:27.868743   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:27 GMT
	I0916 10:49:27.868748   40910 round_trippers.go:580]     Audit-Id: f60c7a2a-f2ee-4138-b1ed-d6f70ddd2fc1
	I0916 10:49:27.868753   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:27.868757   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:27.868762   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:27.868883   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:28.365685   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:28.365724   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.365735   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.365740   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.367832   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:28.367855   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.367865   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.367871   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.367875   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.367878   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.367885   40910 round_trippers.go:580]     Audit-Id: 3519a8ba-89e8-449c-a3ce-81d2d117013a
	I0916 10:49:28.367888   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.368091   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:28.368545   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:28.368560   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.368567   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.368570   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.370123   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:28.370137   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.370144   40910 round_trippers.go:580]     Audit-Id: 81c91e77-451a-4e82-8ad3-17058cb89bfb
	I0916 10:49:28.370148   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.370150   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.370153   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.370155   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.370160   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.370272   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:28.865847   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:28.865874   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.865879   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.865884   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.868017   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:28.868032   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.868037   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.868064   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.868069   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.868073   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.868077   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.868080   40910 round_trippers.go:580]     Audit-Id: 22ec9eaf-934f-40ee-aa27-698b4f811420
	I0916 10:49:28.868226   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:28.868648   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:28.868660   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:28.868665   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:28.868669   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:28.870273   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:28.870292   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:28.870297   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:28.870302   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:28.870304   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:28.870307   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:28 GMT
	I0916 10:49:28.870310   40910 round_trippers.go:580]     Audit-Id: 81eeb752-21c3-40bc-9727-aca860a81cdb
	I0916 10:49:28.870315   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:28.870509   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:29.366101   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:29.366124   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.366129   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.366134   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.368166   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:29.368185   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.368193   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.368198   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.368202   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.368207   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.368212   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.368216   40910 round_trippers.go:580]     Audit-Id: 9cd3e580-cdfb-423c-bc2f-6ea6d94d900b
	I0916 10:49:29.368357   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:29.368785   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:29.368800   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.368809   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.368816   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.370388   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:29.370409   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.370419   40910 round_trippers.go:580]     Audit-Id: 9bd8cb70-e364-45e1-8286-ea8e9be4eaef
	I0916 10:49:29.370426   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.370435   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.370439   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.370444   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.370448   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.370577   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:29.866238   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:29.866273   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.866284   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.866290   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.868503   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:29.868527   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.868535   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.868541   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.868546   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.868550   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.868557   40910 round_trippers.go:580]     Audit-Id: cbc42e1f-b214-49e1-a3e2-b01bb04dc7fd
	I0916 10:49:29.868562   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.868738   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:29.869186   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:29.869199   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:29.869206   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:29.869211   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:29.870842   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:29.870873   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:29.870884   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:29.870891   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:29 GMT
	I0916 10:49:29.870899   40910 round_trippers.go:580]     Audit-Id: 7898da41-390b-4bc4-bbb1-ceb031be7790
	I0916 10:49:29.870907   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:29.870913   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:29.870920   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:29.871087   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:29.871450   40910 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:30.365532   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:30.365553   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.365563   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.365569   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.367877   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:30.367900   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.367909   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.367914   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.367918   40910 round_trippers.go:580]     Audit-Id: 709e831c-1b03-4e44-be9e-a492de5f1eb0
	I0916 10:49:30.367922   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.367925   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.367929   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.368417   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:30.369016   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:30.369033   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.369043   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.369057   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.370816   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:30.370831   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.370837   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.370840   40910 round_trippers.go:580]     Audit-Id: caa1d57b-b973-4df7-8a49-fe33023b8323
	I0916 10:49:30.370845   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.370847   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.370850   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.370853   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.370970   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:30.865613   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:30.865643   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.865653   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.865659   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.868081   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:30.868102   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.868112   40910 round_trippers.go:580]     Audit-Id: 5cccd51f-69ec-4732-a082-baf813bc949a
	I0916 10:49:30.868118   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.868121   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.868125   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.868129   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.868134   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.868283   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"405","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8976 chars]
	I0916 10:49:30.868749   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:30.868766   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:30.868776   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:30.868783   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:30.870511   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:30.870536   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:30.870548   40910 round_trippers.go:580]     Audit-Id: 3de4db8a-a9d7-48e5-8743-b3320d951d93
	I0916 10:49:30.870554   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:30.870559   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:30.870566   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:30.870574   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:30.870579   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:30 GMT
	I0916 10:49:30.870769   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.366254   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2
	I0916 10:49:31.366282   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.366292   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.366297   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.368446   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.368461   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.368467   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.368471   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.368474   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.368477   40910 round_trippers.go:580]     Audit-Id: 2e3cdb46-3866-4292-9b40-16416c47d3db
	I0916 10:49:31.368482   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.368484   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.368626   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-ubuntu-20-agent-2","namespace":"kube-system","uid":"d9fac362-fee0-4ee4-9a06-22b343085d2d","resourceVersion":"489","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"10.138.0.48:8441","kubernetes.io/config.hash":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.mirror":"a5ababb2af12b481e591ddfe93ae3b1f","kubernetes.io/config.seen":"2024-09-16T10:48:45.043155406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8732 chars]
	I0916 10:49:31.369064   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.369077   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.369083   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.369086   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.370882   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.370897   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.370902   40910 round_trippers.go:580]     Audit-Id: cb46d7f4-7e31-4e5f-af35-de6fc33b39d0
	I0916 10:49:31.370906   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.370912   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.370917   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.370921   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.370927   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.371093   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.371463   40910 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.371482   40910 pod_ready.go:82] duration metric: took 3.506229892s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
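
Every "round_trippers.go" line in this trace is produced by client-go's debug transport, which minikube surfaces when run with --alsologtostderr at a high enough klog -v level (the URL, request headers, response status with elapsed time, and response headers seen here). A rough, self-contained sketch of that wrapping idea, using illustrative names rather than client-go's real types:

// Sketch (assumed names, not client-go's actual round_trippers.go): wrap an
// http.RoundTripper and log the request line, request headers, response
// status with elapsed time, and response headers, mirroring the trace format.
package transportdebug

import (
	"log"
	"net/http"
	"time"
)

type debugRoundTripper struct{ next http.RoundTripper }

func (d debugRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vals := range req.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := d.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, vals := range resp.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

// Wrap installs the debug transport on an http.Client.
func Wrap(c *http.Client) {
	base := c.Transport
	if base == nil {
		base = http.DefaultTransport
	}
	c.Transport = debugRoundTripper{next: base}
}
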
	I0916 10:49:31.371495   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.371548   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-20-agent-2
	I0916 10:49:31.371558   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.371567   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.371572   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.373131   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.373152   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.373161   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.373167   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.373171   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.373176   40910 round_trippers.go:580]     Audit-Id: 8c1b2bc3-4321-46dd-a9e8-a793bd0581e6
	I0916 10:49:31.373180   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.373184   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.373370   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-ubuntu-20-agent-2","namespace":"kube-system","uid":"45d39430-8de5-404d-a2b8-bbf47738a4c7","resourceVersion":"478","creationTimestamp":"2024-09-16T10:48:49Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ccbff5351fb3e01bcec8c471c38698f0","kubernetes.io/config.mirror":"ccbff5351fb3e01bcec8c471c38698f0","kubernetes.io/config.seen":"2024-09-16T10:48:45.043157142Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8310 chars]
	I0916 10:49:31.373911   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.373927   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.373936   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.373944   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.375335   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.375348   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.375353   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.375357   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.375361   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.375367   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.375372   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.375380   40910 round_trippers.go:580]     Audit-Id: 490b2e37-3f44-4b3d-b73f-edf84078751f
	I0916 10:49:31.375599   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.375961   40910 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.376016   40910 pod_ready.go:82] duration metric: took 4.501071ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.376032   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.376092   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lt5f5
	I0916 10:49:31.376105   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.376116   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.376126   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.377440   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.377451   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.377458   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.377464   40910 round_trippers.go:580]     Audit-Id: fac59a65-6a75-46f0-991d-a4f66597a838
	I0916 10:49:31.377469   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.377475   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.377480   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.377489   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.377594   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lt5f5","generateName":"kube-proxy-","namespace":"kube-system","uid":"2e01c31f-c798-45c0-98a2-ee94c3b9d631","resourceVersion":"400","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4b7ac346-9c76-4a4c-9bfa-9795be9bed9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b7ac346-9c76-4a4c-9bfa-9795be9bed9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6391 chars]
	I0916 10:49:31.378004   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.378018   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.378024   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.378029   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.379340   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.379357   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.379366   40910 round_trippers.go:580]     Audit-Id: c70261f9-8004-4761-9b84-7c5500180ba3
	I0916 10:49:31.379372   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.379376   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.379382   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.379389   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.379393   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.379540   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.380019   40910 pod_ready.go:93] pod "kube-proxy-lt5f5" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.380035   40910 pod_ready.go:82] duration metric: took 3.995814ms for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.380043   40910 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.380091   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2
	I0916 10:49:31.380098   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.380106   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.380111   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.381438   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.381450   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.381458   40910 round_trippers.go:580]     Audit-Id: 5348a0ff-19c6-4754-9776-14f62783efc4
	I0916 10:49:31.381465   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.381473   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.381479   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.381485   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.381489   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.381556   40910 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-ubuntu-20-agent-2","namespace":"kube-system","uid":"a9041542-d7b5-4571-87c5-a6e9e4ecfd5e","resourceVersion":"480","creationTimestamp":"2024-09-16T10:48:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6de72559ec804c46642b9388a6a99321","kubernetes.io/config.mirror":"6de72559ec804c46642b9388a6a99321","kubernetes.io/config.seen":"2024-09-16T10:48:50.455155081Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5192 chars]
	I0916 10:49:31.381932   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes/ubuntu-20-agent-2
	I0916 10:49:31.381949   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.381955   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.381962   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.383268   40910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:49:31.383281   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.383287   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.383291   40910 round_trippers.go:580]     Audit-Id: c3ab90c6-6b9c-4fb1-aaf1-51037b21396f
	I0916 10:49:31.383294   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.383297   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.383301   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.383303   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.383501   40910 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersio
n":"v1","time":"2024-09-16T10:48:47Z","fieldsType":"FieldsV1","fieldsV1 [truncated 8370 chars]
	I0916 10:49:31.383914   40910 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:31.383930   40910 pod_ready.go:82] duration metric: took 3.881215ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:31.383943   40910 pod_ready.go:39] duration metric: took 13.541849297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:49:31.383965   40910 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:49:31.384035   40910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:31.402292   40910 api_server.go:72] duration metric: took 16.523814653s to wait for apiserver process to appear ...
	I0916 10:49:31.402310   40910 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:49:31.402332   40910 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:31.405679   40910 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:49:31.405747   40910 round_trippers.go:463] GET https://10.138.0.48:8441/version
	I0916 10:49:31.405757   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.405765   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.405770   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.406428   40910 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:49:31.406442   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.406448   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.406452   40910 round_trippers.go:580]     Audit-Id: c52d338f-b459-405a-9a62-36fe356eca72
	I0916 10:49:31.406456   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.406459   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.406462   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.406464   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.406468   40910 round_trippers.go:580]     Content-Length: 263
	I0916 10:49:31.406480   40910 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:49:31.406539   40910 api_server.go:141] control plane version: v1.31.1
	I0916 10:49:31.406552   40910 api_server.go:131] duration metric: took 4.238245ms to wait for apiserver health ...
	I0916 10:49:31.406559   40910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:49:31.406604   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:31.406611   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.406617   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.406620   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.408753   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.408768   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.408777   40910 round_trippers.go:580]     Audit-Id: ca00f77e-55b6-40d3-942d-caeba2f2b949
	I0916 10:49:31.408783   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.408787   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.408794   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.408800   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.408804   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.409173   40910 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"471","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53224 chars]
	I0916 10:49:31.410747   40910 system_pods.go:59] 7 kube-system pods found
	I0916 10:49:31.410769   40910 system_pods.go:61] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:49:31.410774   40910 system_pods.go:61] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:49:31.410777   40910 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [d9fac362-fee0-4ee4-9a06-22b343085d2d] Running
	I0916 10:49:31.410781   40910 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:49:31.410785   40910 system_pods.go:61] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:49:31.410788   40910 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:49:31.410793   40910 system_pods.go:61] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:49:31.410799   40910 system_pods.go:74] duration metric: took 4.235295ms to wait for pod list to return data ...
	I0916 10:49:31.410806   40910 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:49:31.410859   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:49:31.410869   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.410876   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.410880   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.412925   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.412940   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.412948   40910 round_trippers.go:580]     Audit-Id: c38fc8ed-86c8-4b02-b744-7085955fb70a
	I0916 10:49:31.412955   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.412961   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.412965   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.412969   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.412975   40910 round_trippers.go:580]     Content-Length: 261
	I0916 10:49:31.412980   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.412995   40910 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9d76d48e-93f1-40f0-9e21-ae9ef2c7919a","resourceVersion":"293","creationTimestamp":"2024-09-16T10:48:55Z"}}]}
	I0916 10:49:31.413218   40910 default_sa.go:45] found service account: "default"
	I0916 10:49:31.413236   40910 default_sa.go:55] duration metric: took 2.424518ms for default service account to be created ...
	I0916 10:49:31.413244   40910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:49:31.566665   40910 request.go:632] Waited for 153.359422ms due to client-side throttling, not priority and fairness, request: GET:https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:31.566718   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/namespaces/kube-system/pods
	I0916 10:49:31.566723   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.566730   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.566735   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.569281   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.569304   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.569311   40910 round_trippers.go:580]     Audit-Id: 386ae252-9edd-4dbb-81ae-7c9910b78122
	I0916 10:49:31.569315   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.569318   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.569321   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.569323   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.569326   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.569892   40910 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9tmvq","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"64b157a7-a274-493f-ad2d-3eb841c345db","resourceVersion":"471","creationTimestamp":"2024-09-16T10:48:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:48:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cffd58e-c8dc-4cfd-8b67-e5140f3be02d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53224 chars]
	I0916 10:49:31.571521   40910 system_pods.go:86] 7 kube-system pods found
	I0916 10:49:31.571550   40910 system_pods.go:89] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:49:31.571556   40910 system_pods.go:89] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:49:31.571561   40910 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [d9fac362-fee0-4ee4-9a06-22b343085d2d] Running
	I0916 10:49:31.571566   40910 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:49:31.571570   40910 system_pods.go:89] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:49:31.571574   40910 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:49:31.571581   40910 system_pods.go:89] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:49:31.571591   40910 system_pods.go:126] duration metric: took 158.342376ms to wait for k8s-apps to be running ...
	I0916 10:49:31.571602   40910 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:49:31.571647   40910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:49:31.584366   40910 system_svc.go:56] duration metric: took 12.755896ms WaitForService to wait for kubelet
	I0916 10:49:31.584391   40910 kubeadm.go:582] duration metric: took 16.705915399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:49:31.584407   40910 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:49:31.766809   40910 request.go:632] Waited for 182.321668ms due to client-side throttling, not priority and fairness, request: GET:https://10.138.0.48:8441/api/v1/nodes
	I0916 10:49:31.766863   40910 round_trippers.go:463] GET https://10.138.0.48:8441/api/v1/nodes
	I0916 10:49:31.766868   40910 round_trippers.go:469] Request Headers:
	I0916 10:49:31.766875   40910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:31.766878   40910 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:31.769413   40910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:31.769431   40910 round_trippers.go:577] Response Headers:
	I0916 10:49:31.769438   40910 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ea95780c-522d-487a-b022-49137f921fba
	I0916 10:49:31.769442   40910 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f08bc553-100b-439b-a0f2-682f7fe0f0a1
	I0916 10:49:31.769448   40910 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:49:31 GMT
	I0916 10:49:31.769454   40910 round_trippers.go:580]     Audit-Id: 5050ce4a-e361-49e4-87da-8631e833fb0a
	I0916 10:49:31.769458   40910 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:49:31.769462   40910 round_trippers.go:580]     Content-Type: application/json
	I0916 10:49:31.769624   40910 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"ubuntu-20-agent-2","uid":"db867971-3aa1-4828-8795-6ee44b9af7fa","resourceVersion":"391","creationTimestamp":"2024-09-16T10:48:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"ubuntu-20-agent-2","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"minikube","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_48_51_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{
"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024 [truncated 8423 chars]
	I0916 10:49:31.770045   40910 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:31.770069   40910 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:31.770080   40910 node_conditions.go:105] duration metric: took 185.6687ms to run NodePressure ...
	I0916 10:49:31.770090   40910 start.go:241] waiting for startup goroutines ...
	I0916 10:49:31.770097   40910 start.go:246] waiting for cluster config update ...
	I0916 10:49:31.770106   40910 start.go:255] writing updated cluster config ...
	I0916 10:49:31.770345   40910 exec_runner.go:51] Run: rm -f paused
	I0916 10:49:31.774603   40910 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:49:31.775891   40910 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:49:33 UTC. --
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:49:13 ubuntu-20-agent-2 cri-dockerd[39148]: time="2024-09-16T10:49:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b9df597ae39c417a09955b7152d786e4b3098b8c35431d4eda14b67a7326566\""
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: cri-docker.service: Succeeded.
	Sep 16 10:49:13 ubuntu-20-agent-2 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 16 10:49:14 ubuntu-20-agent-2 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Start docker client with request timeout 0s"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Loaded network plugin cni"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 16 10:49:14 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:14Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 16 10:49:14 ubuntu-20-agent-2 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dc3e2cee9ae5f57aadbc2aaceeb4eab6703250b588a22cbe45191fdfd498d95d/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28927fc2d6545e5de958c3a564755d6cc294c19270fbd681fecefdc67d9960c8/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b51e183b7b46cb84c0a36aeef87ab5db48a381bf69bd9789f03783caeb9979c6/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:15 ubuntu-20-agent-2 dockerd[41620]: time="2024-09-16T10:49:15.620133539Z" level=info msg="ignoring event" container=0d522fc642e51982c70238dfb6f58169923c1becb405bcb2e6462dabf54cf54d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:19 ubuntu-20-agent-2 cri-dockerd[41970]: time="2024-09-16T10:49:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6af15c63a0094873696c63bdb5039e18197b9b2cabbc974c70cac80073df9cb5/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a45299c063bb1       c69fa2e9cbf5f       14 seconds ago      Running             coredns                   1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	0d522fc642e51       6e38f40d628db       18 seconds ago      Exited              storage-provisioner       2                   b51e183b7b46c       storage-provisioner
	ff9c282d39039       2e96e5913fc06       18 seconds ago      Running             etcd                      1                   ad166eb13016a       etcd-ubuntu-20-agent-2
	552dd24d3b02d       60c005f310ff3       18 seconds ago      Running             kube-proxy                1                   dc3e2cee9ae5f       kube-proxy-lt5f5
	67e355cfcbda0       6bab7719df100       18 seconds ago      Running             kube-apiserver            1                   28927fc2d6545       kube-apiserver-ubuntu-20-agent-2
	bd9bbeacd72df       9aa1fad941575       18 seconds ago      Running             kube-scheduler            1                   59ae2583e1f56       kube-scheduler-ubuntu-20-agent-2
	76c209608f0b3       175ffd71cce3d       18 seconds ago      Running             kube-controller-manager   1                   317985ddf47a1       kube-controller-manager-ubuntu-20-agent-2
	458949ce6fd13       c69fa2e9cbf5f       37 seconds ago      Exited              coredns                   0                   6b9df597ae39c       coredns-7c65d6cfc9-9tmvq
	5d4b6365fb999       60c005f310ff3       37 seconds ago      Exited              kube-proxy                0                   dc4e1eb7881a9       kube-proxy-lt5f5
	8949fc35206b3       2e96e5913fc06       47 seconds ago      Exited              etcd                      0                   33693827aa1af       etcd-ubuntu-20-agent-2
	8b95544e0ae0c       9aa1fad941575       47 seconds ago      Exited              kube-scheduler            0                   75baf2b9ae9f6       kube-scheduler-ubuntu-20-agent-2
	a84496f2946e5       6bab7719df100       47 seconds ago      Exited              kube-apiserver            0                   a1b484ea8be60       kube-apiserver-ubuntu-20-agent-2
	043a8354243a6       175ffd71cce3d       47 seconds ago      Exited              kube-controller-manager   0                   cb842334bb4ef       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [458949ce6fd1] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44547 - 2953 "HINFO IN 7152552342506087924.8521799898990137584. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018204297s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:49:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:49:00 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     38s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 37s   kube-proxy       
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 43s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 43s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  43s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  43s   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           39s   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           12s   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 08 df 66 25 46 08 06
	[  +4.924530] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	
	
	==> etcd [8949fc35206b] <==
	{"level":"info","ts":"2024-09-16T10:48:47.120366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:48:47.120375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:48:47.121315Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.121526Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:48:47.121550Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:48:47.121531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:48:47.121866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:48:47.121923Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:48:47.121993Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.122061Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.122082Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:48:47.122675Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:48:47.122722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:48:47.123483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:48:47.123950Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:02.638546Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:49:02.638610Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ubuntu-20-agent-2","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"]}
	{"level":"warn","ts":"2024-09-16T10:49:02.638703Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 10.138.0.48:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:49:02.638776Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 10.138.0.48:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:49:02.640558Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:49:02.640658Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:49:02.664428Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b435b960bec7c3c","current-leader-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:02.666169Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:02.666259Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:02.666270Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ubuntu-20-agent-2","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"]}
	
	
	==> etcd [ff9c282d3903] <==
	{"level":"info","ts":"2024-09-16T10:49:15.690895Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:15.691581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:15.691652Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:15.692508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:15.694942Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:15.695055Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:15.695077Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:15.695182Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:15.695210Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:16.982566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:49:16.982616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:49:16.982658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:49:16.982673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.982679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.982688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.982695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:16.985345Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:16.985369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:16.985345Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:16.985594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:16.985619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:16.986983Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:16.987215Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:16.988059Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:16.988378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:49:33 up 32 min,  0 users,  load average: 0.94, 0.45, 0.27
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [67e355cfcbda] <==
	I0916 10:49:17.820446       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0916 10:49:17.820454       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0916 10:49:17.819971       1 controller.go:119] Starting legacy_token_tracking_controller
	I0916 10:49:17.820471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0916 10:49:17.920209       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:49:17.920325       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:49:17.920379       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:17.920500       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:17.920513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:17.920635       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:17.920648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:17.920636       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:17.920690       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:17.920701       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:17.920707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:17.920715       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:17.929865       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0916 10:49:17.930228       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:17.933615       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:49:17.936733       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:49:17.936760       1 policy_source.go:224] refreshing policies
	I0916 10:49:17.942076       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:18.823613       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:21.505222       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:49:21.555277       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a84496f2946e] <==
	W0916 10:49:11.889275       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:11.971366       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:11.983065       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.025617       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.041054       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.069465       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.100121       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.120910       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.155119       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.171966       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.236000       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.307049       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.318425       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.344361       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.345630       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.357221       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.358492       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.364943       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.376569       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.433424       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.472392       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.509051       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.541793       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.635078       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:12.653468       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [043a8354243a] <==
	I0916 10:48:54.721782       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:48:54.721795       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:48:54.721803       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:48:54.727948       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ubuntu-20-agent-2" podCIDRs=["10.244.0.0/24"]
	I0916 10:48:54.727974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:48:54.728102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:48:54.769897       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:48:54.867735       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:48:54.917646       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:48:54.922999       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:48:54.923061       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:48:54.927848       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:48:55.337805       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:48:55.366499       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:48:55.366531       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:48:55.480435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:48:55.843059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="417.548288ms"
	I0916 10:48:55.852090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.982962ms"
	I0916 10:48:55.855974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.841817ms"
	I0916 10:48:55.856069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.897µs"
	I0916 10:48:56.545846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="194.1µs"
	I0916 10:48:57.582160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.052µs"
	I0916 10:48:57.587309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.871µs"
	I0916 10:48:57.590430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.694µs"
	I0916 10:49:00.899110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	
	
	==> kube-controller-manager [76c209608f0b] <==
	I0916 10:49:21.202394       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0916 10:49:21.202575       1 shared_informer.go:320] Caches are synced for deployment
	I0916 10:49:21.202574       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:49:21.207037       1 shared_informer.go:320] Caches are synced for node
	I0916 10:49:21.207105       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 10:49:21.207106       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:21.207156       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:49:21.207161       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:49:21.207168       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:49:21.207222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:49:21.207642       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:21.209151       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:49:21.210332       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 10:49:21.210560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="29.215646ms"
	I0916 10:49:21.210735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.532µs"
	I0916 10:49:21.212573       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:49:21.252382       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:49:21.283055       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0916 10:49:21.391324       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:49:21.402727       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:49:21.406995       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:49:21.452298       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:49:21.822159       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:21.855859       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:21.855886       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [552dd24d3b02] <==
	I0916 10:49:15.706406       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:17.853578       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:17.853659       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:17.900242       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:17.900311       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:17.903531       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:17.903908       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:17.903945       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:17.905542       1 config.go:328] "Starting node config controller"
	I0916 10:49:17.905565       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:17.905768       1 config.go:199] "Starting service config controller"
	I0916 10:49:17.905783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:17.905828       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:17.906166       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:18.006137       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:18.006194       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:18.007364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5d4b6365fb99] <==
	I0916 10:48:56.237395       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:48:56.327177       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:48:56.327237       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:48:56.348155       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:48:56.348239       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:48:56.350670       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:48:56.351104       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:48:56.351137       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:48:56.352578       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:48:56.352619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:48:56.352651       1 config.go:199] "Starting service config controller"
	I0916 10:48:56.352661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:48:56.353009       1 config.go:328] "Starting node config controller"
	I0916 10:48:56.353023       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:48:56.452809       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:48:56.452812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:48:56.453083       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8b95544e0ae0] <==
	E0916 10:48:47.999867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:47.999760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:48:47.999898       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:48.817391       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:48:48.817439       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:48:48.932232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:48.932274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:48.969626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:48:48.969664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:48.976089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:48:48.976142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.046101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:48:49.046157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.072535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:49.072575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.117363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:49.117402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.119092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:48:49.119120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:48:49.152686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:48:49.152732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:48:50.595595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:02.635938       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 10:49:02.636080       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0916 10:49:02.636275       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bd9bbeacd72d] <==
	I0916 10:49:15.971969       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:17.842625       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:17.842666       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:49:17.842682       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:17.842691       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:17.875058       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:17.875385       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:17.878777       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:17.878838       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:17.878894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:17.878920       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:17.979658       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:49:33 UTC. --
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.175494   40049 status_manager.go:851] "Failed to get status for pod" podUID="6de72559ec804c46642b9388a6a99321" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.179369   40049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180073   40049 status_manager.go:851] "Failed to get status for pod" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180388   40049 status_manager.go:851] "Failed to get status for pod" podUID="2e01c31f-c798-45c0-98a2-ee94c3b9d631" pod="kube-system/kube-proxy-lt5f5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lt5f5\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180684   40049 status_manager.go:851] "Failed to get status for pod" podUID="64b157a7-a274-493f-ad2d-3eb841c345db" pod="kube-system/coredns-7c65d6cfc9-9tmvq" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.180906   40049 status_manager.go:851] "Failed to get status for pod" podUID="a5ababb2af12b481e591ddfe93ae3b1f" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.181122   40049 status_manager.go:851] "Failed to get status for pod" podUID="6de72559ec804c46642b9388a6a99321" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.181407   40049 status_manager.go:851] "Failed to get status for pod" podUID="5b137b06bdfaed6743b655439322dfe0" pod="kube-system/etcd-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.181670   40049 status_manager.go:851] "Failed to get status for pod" podUID="ccbff5351fb3e01bcec8c471c38698f0" pod="kube-system/kube-controller-manager-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.191939   40049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d1d58f49444d76811be9a80b2bfc8ab683f3b2f0db60a7ce1a40530a024e6e"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.192925   40049 status_manager.go:851] "Failed to get status for pod" podUID="5b137b06bdfaed6743b655439322dfe0" pod="kube-system/etcd-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.193316   40049 status_manager.go:851] "Failed to get status for pod" podUID="ccbff5351fb3e01bcec8c471c38698f0" pod="kube-system/kube-controller-manager-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.193623   40049 status_manager.go:851] "Failed to get status for pod" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.194329   40049 status_manager.go:851] "Failed to get status for pod" podUID="2e01c31f-c798-45c0-98a2-ee94c3b9d631" pod="kube-system/kube-proxy-lt5f5" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lt5f5\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.194705   40049 status_manager.go:851] "Failed to get status for pod" podUID="64b157a7-a274-493f-ad2d-3eb841c345db" pod="kube-system/coredns-7c65d6cfc9-9tmvq" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9tmvq\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.195033   40049 status_manager.go:851] "Failed to get status for pod" podUID="a5ababb2af12b481e591ddfe93ae3b1f" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:15 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:15.195349   40049 status_manager.go:851] "Failed to get status for pod" podUID="6de72559ec804c46642b9388a6a99321" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:16.219023   40049 scope.go:117] "RemoveContainer" containerID="2d84812a1876e909acb666fe34bc9157c82cec862fdaf46f48e283ad4b6e3073"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:16.219416   40049 scope.go:117] "RemoveContainer" containerID="0d522fc642e51982c70238dfb6f58169923c1becb405bcb2e6462dabf54cf54d"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:16.219612   40049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dfe4a726-3764-4daf-a322-8f33ae3528f7)\"" pod="kube-system/storage-provisioner" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7"
	Sep 16 10:49:16 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:16.230467   40049 scope.go:117] "RemoveContainer" containerID="ca797a7433e09b256591c0abd395d30383489ab3e33095f655f88ed7ba38bed7"
	Sep 16 10:49:17 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:17.836079   40049 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 16 10:49:17 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:17.838190   40049 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 16 10:49:29 ubuntu-20-agent-2 kubelet[40049]: I0916 10:49:29.473542   40049 scope.go:117] "RemoveContainer" containerID="0d522fc642e51982c70238dfb6f58169923c1becb405bcb2e6462dabf54cf54d"
	Sep 16 10:49:29 ubuntu-20-agent-2 kubelet[40049]: E0916 10:49:29.473730   40049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dfe4a726-3764-4daf-a322-8f33ae3528f7)\"" pod="kube-system/storage-provisioner" podUID="dfe4a726-3764-4daf-a322-8f33ae3528f7"
	
	
	==> storage-provisioner [0d522fc642e5] <==
	I0916 10:49:15.582187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:49:15.584859       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (435.982µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubectlGetPods (1.16s)
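Note: the repeated "fork/exec /usr/local/bin/kubectl: exec format error" above (and again in TestFunctional/serial/ComponentHealth below) means the Linux kernel refused to execute the kubectl binary itself (ENOEXEC), which usually points to a wrong-architecture or truncated/corrupted file at /usr/local/bin/kubectl rather than a cluster problem. A minimal, hypothetical Go sketch (not part of the test suite; the path is taken from the failure message) of how one might verify the binary on the agent:

	// checkelf.go: inspect a binary to see why fork/exec reports
	// "exec format error" for it.
	package main

	import (
		"debug/elf"
		"fmt"
		"os"
	)

	func main() {
		path := "/usr/local/bin/kubectl" // path from the failure above
		f, err := elf.Open(path)
		if err != nil {
			// Not a valid ELF file at all: e.g. a truncated download or
			// an HTML error page saved in place of the binary.
			fmt.Fprintf(os.Stderr, "%s: not a valid ELF binary: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()
		// On this linux/amd64 agent a runnable binary should report
		// Class=ELFCLASS64 and Machine=EM_X86_64; anything else (for
		// example EM_AARCH64) would also produce "exec format error".
		fmt.Printf("%s: class=%v machine=%v\n", path, f.Class, f.Machine)
	}

Running `go run checkelf.go` on the agent would distinguish a wrong-arch binary from a corrupted one; `file /usr/local/bin/kubectl` gives the same information from a shell.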

TestFunctional/serial/ComponentHealth (1.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json: fork/exec /usr/local/bin/kubectl: exec format error (430.011µs)
functional_test.go:812: failed to get components. args "kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json": fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p minikube                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| addons  | disable dashboard -p minikube                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	| start   | -p minikube --wait=true                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                          |          |         |         |                     |                     |
	|         | --addons=registry                                                        |          |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |          |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |          |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |          |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm                                     |          |         |         |                     |                     |
	|         | --addons=helm-tiller                                                     |          |         |         |                     |                     |
	| ip      | minikube ip                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	| addons  | minikube addons disable                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | registry --alsologtostderr                                               |          |         |         |                     |                     |
	|         | -v=1                                                                     |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:30 UTC | 16 Sep 24 10:30 UTC |
	|         | minikube                                                                 |          |         |         |                     |                     |
	| addons  | minikube addons                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server                                                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| addons  | minikube addons disable                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:38 UTC |
	|         | helm-tiller --alsologtostderr                                            |          |         |         |                     |                     |
	|         | -v=1                                                                     |          |         |         |                     |                     |
	| addons  | enable headlamp -p minikube                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| addons  | minikube addons disable                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | headlamp --alsologtostderr                                               |          |         |         |                     |                     |
	|         | -v=1                                                                     |          |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | minikube                                                                 |          |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | -p minikube                                                              |          |         |         |                     |                     |
	| addons  | minikube addons disable yakd                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| stop    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | enable dashboard -p minikube                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | disable dashboard -p minikube                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons  | disable gvisor -p minikube                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| delete  | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| start   | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|         | --cert-expiration=3m                                                     |          |         |         |                     |                     |
	|         | --driver=none                                                            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start   | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	|         | --cert-expiration=8760h                                                  |          |         |         |                     |                     |
	|         | --driver=none                                                            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| delete  | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start   | -p minikube --memory=4000                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|         | --apiserver-port=8441                                                    |          |         |         |                     |                     |
	|         | --wait=all --driver=none                                                 |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start   | -p minikube --alsologtostderr                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | -v=8                                                                     |          |         |         |                     |                     |
	| kubectl | minikube kubectl -- --context                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | minikube get pods                                                        |          |         |         |                     |                     |
	| start   | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |          |         |         |                     |                     |
	|         | --wait=all                                                               |          |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:49:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:49:34.367697   44102 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:49:34.367796   44102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:34.367800   44102 out.go:358] Setting ErrFile to fd 2...
	I0916 10:49:34.367803   44102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:34.368000   44102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:49:34.368576   44102 out.go:352] Setting JSON to false
	I0916 10:49:34.369762   44102 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1925,"bootTime":1726481849,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:49:34.369864   44102 start.go:139] virtualization: kvm guest
	I0916 10:49:34.372207   44102 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:49:34.373528   44102 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:49:34.373549   44102 notify.go:220] Checking for updates...
	I0916 10:49:34.373596   44102 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:49:34.375134   44102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:49:34.376456   44102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:34.377827   44102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:49:34.379067   44102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:49:34.380215   44102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:49:34.381830   44102 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:49:34.381903   44102 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:49:34.382258   44102 exec_runner.go:51] Run: systemctl --version
	I0916 10:49:34.394119   44102 out.go:177] * Using the none driver based on existing profile
	I0916 10:49:34.395240   44102 start.go:297] selected driver: none
	I0916 10:49:34.395245   44102 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:34.395334   44102 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:49:34.395356   44102 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 10:49:34.396402   44102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:49:34.396426   44102 cni.go:84] Creating CNI manager for ""
	I0916 10:49:34.396480   44102 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:34.396521   44102 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:34.398149   44102 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:49:34.399432   44102 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:49:34.399661   44102 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:49:34.399737   44102 start.go:364] duration metric: took 42.937µs to acquireMachinesLock for "minikube"
	I0916 10:49:34.399751   44102 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:49:34.399756   44102 fix.go:54] fixHost starting: 
	I0916 10:49:34.400566   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:34.400578   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:49:34.400609   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:34.417610   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/42804/cgroup
	I0916 10:49:34.427297   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74"
	I0916 10:49:34.427349   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74/freezer.state
	I0916 10:49:34.435135   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:49:34.435172   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:34.438855   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
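
The sequence above finds the running kube-apiserver with pgrep, maps its PID to a freezer cgroup via /proc/<pid>/cgroup, confirms the cgroup is THAWED, and only then probes /healthz. A minimal Go sketch of the freezer lookup, assuming the cgroup v1 layout seen on this host and hard-coding the PID from the log (not minikube's actual code):

	// Find a process's freezer cgroup and read its state (cgroup v1).
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const pid = "42804" // taken from the log; normally discovered via pgrep
		f, err := os.Open("/proc/" + pid + "/cgroup")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// lines look like "2:freezer:/kubepods/burstable/<pod>/<container>"
			parts := strings.SplitN(sc.Text(), ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				if err != nil {
					panic(err)
				}
				fmt.Printf("freezer state: %s", state) // e.g. "THAWED"
				return
			}
		}
		fmt.Println("no freezer cgroup found")
	}
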
	I0916 10:49:34.438872   44102 fix.go:112] recreateIfNeeded on minikube: state=Running err=<nil>
	W0916 10:49:34.438883   44102 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:49:34.440833   44102 out.go:177] * Updating the running none "minikube" bare metal machine ...
	I0916 10:49:34.442224   44102 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:49:34.442294   44102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:34.442340   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:34.450172   44102 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:49:34.450193   44102 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:49:34.450204   44102 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:49:34.451555   44102 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:49:34.452660   44102 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:49:34.452717   44102 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:49:34.452824   44102 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> 110572.pem in /etc/ssl/certs
	I0916 10:49:34.452922   44102 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts -> hosts in /etc/test/nested/copy/11057
	I0916 10:49:34.452966   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11057
	I0916 10:49:34.460722   44102 exec_runner.go:144] found /etc/ssl/certs/110572.pem, removing ...
	I0916 10:49:34.460735   44102 exec_runner.go:203] rm: /etc/ssl/certs/110572.pem
	I0916 10:49:34.460775   44102 exec_runner.go:51] Run: sudo rm -f /etc/ssl/certs/110572.pem
	I0916 10:49:34.468401   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:34.468524   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1744777790 /etc/ssl/certs/110572.pem
	I0916 10:49:34.476682   44102 exec_runner.go:144] found /etc/test/nested/copy/11057/hosts, removing ...
	I0916 10:49:34.476689   44102 exec_runner.go:203] rm: /etc/test/nested/copy/11057/hosts
	I0916 10:49:34.476722   44102 exec_runner.go:51] Run: sudo rm -f /etc/test/nested/copy/11057/hosts
	I0916 10:49:34.484139   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts --> /etc/test/nested/copy/11057/hosts (40 bytes)
	I0916 10:49:34.484250   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1533693540 /etc/test/nested/copy/11057/hosts
	I0916 10:49:34.491824   44102 start.go:296] duration metric: took 49.589157ms for postStartSetup
	I0916 10:49:34.491834   44102 fix.go:56] duration metric: took 92.078988ms for fixHost
	I0916 10:49:34.491838   44102 start.go:83] releasing machines lock for "minikube", held for 92.094707ms
	I0916 10:49:34.492251   44102 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:49:34.492337   44102 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:49:34.494437   44102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:34.494490   44102 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:34.502745   44102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:34.502761   44102 start.go:495] detecting cgroup driver to use...
	I0916 10:49:34.502779   44102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:34.502870   44102 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:34.518995   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:49:34.527877   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:49:34.537931   44102 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:49:34.537970   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:49:34.546628   44102 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:34.556292   44102 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:49:34.564848   44102 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:34.573674   44102 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:34.581850   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:49:34.590699   44102 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:49:34.599809   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
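
The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, the runc v2 runtime, the CNI conf dir, and unprivileged ports. A sketch of one such rewrite done with Go's regexp package rather than sed (same pattern as the SystemdCgroup edit above; not minikube's actual code, and error handling is minimal):

	// In-place equivalent of:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}
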
	I0916 10:49:34.608239   44102 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:34.615665   44102 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:34.622351   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:34.853534   44102 exec_runner.go:51] Run: sudo systemctl restart containerd
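
The `echo 1 > /proc/sys/net/ipv4/ip_forward` step a few lines up can also be done directly through procfs; a one-line sketch (requires root, like the sudo form in the log):

	// Enable IPv4 forwarding by writing to procfs.
	package main

	import "os"

	func main() {
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
			panic(err)
		}
	}
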
	I0916 10:49:35.007953   44102 start.go:495] detecting cgroup driver to use...
	I0916 10:49:35.007987   44102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:35.008165   44102 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:35.030145   44102 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:49:35.031044   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:49:35.038626   44102 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:49:35.038639   44102 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:35.038674   44102 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:35.045787   44102 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:49:35.045915   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube470809814 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:35.053329   44102 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:49:35.282790   44102 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:49:35.519083   44102 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:49:35.519230   44102 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:49:35.519237   44102 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:49:35.519277   44102 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:49:35.527684   44102 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:49:35.527806   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2137543206 /etc/docker/daemon.json
	I0916 10:49:35.535734   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:35.767198   44102 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:49:46.284206   44102 exec_runner.go:84] Completed: sudo systemctl restart docker: (10.516973976s)
	I0916 10:49:46.284268   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:49:46.300861   44102 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:49:46.328981   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:46.342597   44102 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:49:46.549632   44102 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:49:46.763075   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:46.984012   44102 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:49:46.998648   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:47.011609   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:47.228695   44102 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:49:47.296032   44102 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:49:47.296087   44102 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:49:47.297517   44102 start.go:563] Will wait 60s for crictl version
	I0916 10:49:47.297559   44102 exec_runner.go:51] Run: which crictl
	I0916 10:49:47.298467   44102 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:49:47.327405   44102 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:49:47.327452   44102 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:47.352845   44102 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:47.386887   44102 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:49:47.387006   44102 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:47.390512   44102 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:49:47.392256   44102 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:49:47.394342   44102 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:47.394545   44102 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:49:47.394561   44102 kubeadm.go:934] updating node { 10.138.0.48 8441 v1.31.1 docker true true} ...
	I0916 10:49:47.394671   44102 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
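
The kubelet unit drop-in above uses the standard systemd override pattern: the empty `ExecStart=` clears the packaged command before the full one is set. A hypothetical helper sketching how that ExecStart line could be assembled from the node settings in the log (an illustration only, not minikube's actual implementation):

	// Assemble a kubelet ExecStart line from node settings.
	package main

	import "fmt"

	func execStart(version, hostname, nodeIP, resolvConf string) string {
		return fmt.Sprintf(
			"/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
				"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
				"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s --resolv-conf=%s",
			version, hostname, nodeIP, resolvConf)
	}

	func main() {
		fmt.Println(execStart("v1.31.1", "ubuntu-20-agent-2", "10.138.0.48",
			"/run/systemd/resolve/resolv.conf"))
	}
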
	I0916 10:49:47.394731   44102 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:49:47.491260   44102 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:49:47.491348   44102 cni.go:84] Creating CNI manager for ""
	I0916 10:49:47.491370   44102 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:47.491382   44102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:47.491411   44102 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:47.491618   44102 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
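The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks such a stream and reports each document's kind, assuming gopkg.in/yaml.v3 is available (not minikube's actual parser):

	// List the kind of each document in a multi-document kubeadm YAML.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
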
	I0916 10:49:47.491688   44102 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:47.505358   44102 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:47.505403   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:49:47.520006   44102 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:49:47.520019   44102 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:47.520050   44102 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:47.530854   44102 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:49:47.531003   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2144099678 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:47.542188   44102 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:49:47.542200   44102 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:49:47.542239   44102 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:49:47.551571   44102 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:47.551736   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3534866455 /lib/systemd/system/kubelet.service
	I0916 10:49:47.564693   44102 exec_runner.go:144] found /var/tmp/minikube/kubeadm.yaml.new, removing ...
	I0916 10:49:47.564709   44102 exec_runner.go:203] rm: /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:47.564753   44102 exec_runner.go:51] Run: sudo rm -f /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:47.579911   44102 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2006 bytes)
	I0916 10:49:47.580073   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube844047420 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:47.590241   44102 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:47.591672   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:47.877109   44102 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:49:47.891401   44102 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:49:47.891417   44102 certs.go:194] generating shared ca certs ...
	I0916 10:49:47.891435   44102 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:47.891564   44102 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:49:47.891613   44102 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:49:47.891618   44102 certs.go:256] generating profile certs ...
	I0916 10:49:47.891686   44102 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:49:47.891720   44102 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:49:47.891748   44102 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:49:47.891839   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem (1338 bytes)
	W0916 10:49:47.891860   44102 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:47.891866   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:49:47.891886   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:49:47.891903   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:47.891920   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:47.891952   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:47.892465   44102 exec_runner.go:144] found /var/lib/minikube/certs/ca.crt, removing ...
	I0916 10:49:47.892473   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.crt
	I0916 10:49:47.892502   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.crt
	I0916 10:49:47.900967   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:47.901138   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1210378531 /var/lib/minikube/certs/ca.crt
	I0916 10:49:47.912023   44102 exec_runner.go:144] found /var/lib/minikube/certs/ca.key, removing ...
	I0916 10:49:47.912037   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.key
	I0916 10:49:47.912078   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.key
	I0916 10:49:47.921462   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:49:47.921578   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube992492122 /var/lib/minikube/certs/ca.key
	I0916 10:49:47.930532   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.crt, removing ...
	I0916 10:49:47.930544   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:47.930574   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:47.941547   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:47.941735   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3597742140 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:47.949959   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.key, removing ...
	I0916 10:49:47.949972   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:47.950013   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:47.958535   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:47.958720   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube39691256 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:47.968590   44102 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.crt, removing ...
	I0916 10:49:47.968603   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:47.968639   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:47.979089   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:49:47.979255   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3184187309 /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:47.992482   44102 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.key, removing ...
	I0916 10:49:47.992493   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.key
	I0916 10:49:47.992527   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.key
	I0916 10:49:48.004500   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:48.004654   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1788991639 /var/lib/minikube/certs/apiserver.key
	I0916 10:49:48.014833   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.crt, removing ...
	I0916 10:49:48.014847   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:48.014899   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:48.023719   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:48.023836   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1735539355 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:48.031596   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.key, removing ...
	I0916 10:49:48.031607   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:48.031636   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:48.040493   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:49:48.040612   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1102660184 /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:48.048037   44102 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:49:48.048046   44102 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.048082   44102 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.055311   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:48.055454   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube284381175 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.062797   44102 exec_runner.go:144] found /usr/share/ca-certificates/11057.pem, removing ...
	I0916 10:49:48.062806   44102 exec_runner.go:203] rm: /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.062832   44102 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.070851   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem --> /usr/share/ca-certificates/11057.pem (1338 bytes)
	I0916 10:49:48.070962   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1901239915 /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.078323   44102 exec_runner.go:144] found /usr/share/ca-certificates/110572.pem, removing ...
	I0916 10:49:48.078331   44102 exec_runner.go:203] rm: /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.078357   44102 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.085407   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /usr/share/ca-certificates/110572.pem (1708 bytes)
	I0916 10:49:48.085507   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3710384688 /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.093097   44102 exec_runner.go:144] found /var/lib/minikube/kubeconfig, removing ...
	I0916 10:49:48.093105   44102 exec_runner.go:203] rm: /var/lib/minikube/kubeconfig
	I0916 10:49:48.093131   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/kubeconfig
	I0916 10:49:48.100378   44102 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:48.100504   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2672007197 /var/lib/minikube/kubeconfig
	I0916 10:49:48.107945   44102 exec_runner.go:51] Run: openssl version
	I0916 10:49:48.110668   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:48.118824   44102 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.120087   44102 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.120116   44102 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.122882   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:48.131516   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11057.pem && ln -fs /usr/share/ca-certificates/11057.pem /etc/ssl/certs/11057.pem"
	I0916 10:49:48.139894   44102 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.141127   44102 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1338 Sep 16 10:49 /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.141159   44102 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.143998   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11057.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:48.151314   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110572.pem && ln -fs /usr/share/ca-certificates/110572.pem /etc/ssl/certs/110572.pem"
	I0916 10:49:48.160395   44102 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.161633   44102 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1708 Sep 16 10:49 /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.161664   44102 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.164355   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110572.pem /etc/ssl/certs/3ec20f2e.0"
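
Each pair of `openssl x509 -hash` and `ln -fs` commands above installs a CA into the OpenSSL trust directory: the symlink must be named <subject-hash>.0 for lookup to work. A sketch of that pattern, shelling out to openssl for the hash and linking directly to the PEM (the log links via /etc/ssl/certs/minikubeCA.pem; linking straight to the source file is a simplification):

	// Install a CA certificate under its OpenSSL subject hash.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // ignore error; mirrors the force flag of ln -fs
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", pem)
	}
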
	I0916 10:49:48.172002   44102 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:48.173296   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:48.176039   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:48.178835   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:48.181448   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:48.184066   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:48.186560   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
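
The `-checkend 86400` probes above ask whether each certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509, using one of the paths from the log:

	// Mimic `openssl x509 -checkend 86400`: does the cert expire within 24h?
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s")
		} else {
			fmt.Println("certificate is good for at least another day")
		}
	}
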
	I0916 10:49:48.189114   44102 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:48.189216   44102 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:49:48.205373   44102 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:49:48.213537   44102 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:49:48.213544   44102 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:49:48.213575   44102 exec_runner.go:51] Run: sudo test -d /data/minikube
	I0916 10:49:48.220853   44102 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:48.221144   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:48.222201   44102 exec_runner.go:51] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:48.229505   44102 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-09-16 10:48:41.770801188 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-09-16 10:49:47.577025778 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
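
Drift detection here is just `diff -u old new`: diff exits 0 when the files match and 1 when they differ, and minikube treats the latter as "reconfigure the cluster from the new kubeadm.yaml". A sketch of that exit-code convention in Go, with the paths from the log (not minikube's actual code):

	// Run diff and interpret its exit status: 0 = same, 1 = drift, 2 = error.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("configs match; no reconfigure needed")
			return
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			fmt.Printf("config drift detected:\n%s", out)
			return
		}
		panic(err) // exit status 2 or a spawn failure: a real error
	}
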
	I0916 10:49:48.229512   44102 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:49:48.229546   44102 exec_runner.go:51] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:49:48.252689   44102 docker.go:483] Stopping containers: [4c8dc9f7334c 3c1686a3f081 d36cca85a0cf 89edf012e73d 4045e763ce4d 01deb4e9cb0c 733fde545b97 8dde68f011d3 cddf26022ee7 4cc6aa8bc7d5 7b5dd454fcc4 13ae9078b412 b80696d65d3f a45299c063bb 6af15c63a009 0d522fc642e5 ff9c282d3903 552dd24d3b02 67e355cfcbda bd9bbeacd72d 76c209608f0b dc3e2cee9ae5 28927fc2d654 ad166eb13016 317985ddf47a 59ae2583e1f5 b51e183b7b46 a8e886cfa378 60d1d58f4944 6b9df597ae39 dc4e1eb7881a 5b34f2349a51 a1b484ea8be6 75baf2b9ae9f cb842334bb4e 33693827aa1a]
	I0916 10:49:48.252753   44102 exec_runner.go:51] Run: docker stop 4c8dc9f7334c 3c1686a3f081 d36cca85a0cf 89edf012e73d 4045e763ce4d 01deb4e9cb0c 733fde545b97 8dde68f011d3 cddf26022ee7 4cc6aa8bc7d5 7b5dd454fcc4 13ae9078b412 b80696d65d3f a45299c063bb 6af15c63a009 0d522fc642e5 ff9c282d3903 552dd24d3b02 67e355cfcbda bd9bbeacd72d 76c209608f0b dc3e2cee9ae5 28927fc2d654 ad166eb13016 317985ddf47a 59ae2583e1f5 b51e183b7b46 a8e886cfa378 60d1d58f4944 6b9df597ae39 dc4e1eb7881a 5b34f2349a51 a1b484ea8be6 75baf2b9ae9f cb842334bb4e 33693827aa1a
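
The stop step above is a two-command pattern: list kube-system pod containers by Docker name filter, then pass all the IDs to one `docker stop`. A sketch of the same pair driven from Go:

	// List and stop kube-system pod containers via the docker CLI.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("no kube-system containers to stop")
			return
		}
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			panic(err)
		}
		fmt.Printf("stopped %d containers\n", len(ids))
	}
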
	I0916 10:49:48.442643   44102 exec_runner.go:51] Run: sudo systemctl stop kubelet
	I0916 10:49:48.560784   44102 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:49:48.569413   44102 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 16 10:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Sep 16 10:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 16 10:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5599 Sep 16 10:48 /etc/kubernetes/scheduler.conf
	
	I0916 10:49:48.569461   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0916 10:49:48.577497   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0916 10:49:48.585125   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0916 10:49:48.593954   44102 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:48.593994   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:49:48.601757   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0916 10:49:48.609635   44102 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:48.609678   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:49:48.617071   44102 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:49:48.625228   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:48.665763   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:49.684392   44102 exec_runner.go:84] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.01860289s)
	I0916 10:49:49.684410   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:49.970734   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:50.016697   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:50.077897   44102 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:49:50.077968   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:50.578901   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:51.078602   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:51.092825   44102 api_server.go:72] duration metric: took 1.014927236s to wait for apiserver process to appear ...
	I0916 10:49:51.092842   44102 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:49:51.092863   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:53.483754   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 10:49:53.483770   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 10:49:53.483783   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:53.521507   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:53.521527   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:53.593683   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:53.597734   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:53.597754   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:54.093924   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:54.097428   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:54.097446   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:54.593822   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:54.601916   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:54.601932   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:55.093540   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:55.097489   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:49:55.102727   44102 api_server.go:141] control plane version: v1.31.1
	I0916 10:49:55.102741   44102 api_server.go:131] duration metric: took 4.009894582s to wait for apiserver health ...
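
The lines above show minikube's health gate (api_server.go): it polls the apiserver's /healthz roughly every 500ms, logging each 500 body with its per-poststarthook [+]/[-] breakdown, until the endpoint returns 200. A minimal Go sketch of such a poll loop follows; it is not minikube's actual implementation — the URL is taken from the log, and InsecureSkipVerify stands in for the real cluster-CA/client-cert TLS setup.

    // healthz_poll.go — illustrative sketch of an apiserver /healthz wait loop.
    // Assumption: TLS verification is skipped for brevity; real tooling would
    // authenticate with the cluster CA and client certificates.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
            },
            Timeout: 2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
                // 500 responses carry the [+]/[-] poststarthook breakdown seen above.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://10.138.0.48:8441/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
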
	I0916 10:49:55.102748   44102 cni.go:84] Creating CNI manager for ""
	I0916 10:49:55.102757   44102 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:55.104363   44102 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:49:55.105580   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:49:55.115275   44102 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:49:55.115383   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube160449351 /etc/cni/net.d/1-k8s.conflist
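
The bridge CNI step above stages a 496-byte conflist in memory and copies it to /etc/cni/net.d/1-k8s.conflist. The literal file contents are not shown in the log; the sketch below generates a representative bridge + portmap conflist of the kind such a step installs, so every field value here is an assumption, not the file minikube actually wrote.

    // write_conflist.go — sketch of generating a bridge CNI conflist comparable
    // to /etc/cni/net.d/1-k8s.conflist above. All field values are assumptions.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        conflist := map[string]any{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // assumed; the node's PodCIDR below is 10.244.0.0/24
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        data, err := json.MarshalIndent(conflist, "", "  ")
        if err != nil {
            panic(err)
        }
        // Writing into /etc/cni/net.d needs root, hence the sudo cp in the log;
        // this sketch writes to /tmp instead.
        if err := os.WriteFile("/tmp/1-k8s.conflist", data, 0o644); err != nil {
            panic(err)
        }
        fmt.Println(string(data))
    }
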
	I0916 10:49:55.124322   44102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:49:55.133278   44102 system_pods.go:59] 7 kube-system pods found
	I0916 10:49:55.133294   44102 system_pods.go:61] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:49:55.133299   44102 system_pods.go:61] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:49:55.133305   44102 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [4a0a9d93-9f46-4cd7-a3fd-1f7370245887] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:49:55.133310   44102 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:49:55.133314   44102 system_pods.go:61] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0916 10:49:55.133318   44102 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 10:49:55.133322   44102 system_pods.go:61] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:49:55.133327   44102 system_pods.go:74] duration metric: took 8.997814ms to wait for pod list to return data ...
	I0916 10:49:55.133332   44102 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:49:55.136280   44102 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:55.136297   44102 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:55.136306   44102 node_conditions.go:105] duration metric: took 2.970939ms to run NodePressure ...
	I0916 10:49:55.136319   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:55.378848   44102 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 10:49:55.382416   44102 kubeadm.go:739] kubelet initialised
	I0916 10:49:55.382425   44102 kubeadm.go:740] duration metric: took 3.564162ms waiting for restarted kubelet to initialise ...
	I0916 10:49:55.382430   44102 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:49:55.386974   44102 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:57.392689   44102 pod_ready.go:103] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:57.892929   44102 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:57.892941   44102 pod_ready.go:82] duration metric: took 2.505952837s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:57.892948   44102 pod_ready.go:79] waiting up to 4m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:59.898724   44102 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:02.398645   44102 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:02.398655   44102 pod_ready.go:82] duration metric: took 4.505702789s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:02.398664   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:04.403969   44102 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:06.404601   44102 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:06.404611   44102 pod_ready.go:82] duration metric: took 4.005942832s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.404619   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.409868   44102 pod_ready.go:103] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:08.910387   44102 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.910398   44102 pod_ready.go:82] duration metric: took 2.505774179s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.910405   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.914996   44102 pod_ready.go:93] pod "kube-proxy-lt5f5" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.915009   44102 pod_ready.go:82] duration metric: took 4.598106ms for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.915019   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.919034   44102 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.919042   44102 pod_ready.go:82] duration metric: took 4.017487ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.919050   44102 pod_ready.go:39] duration metric: took 13.536612391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
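
The pod_ready.go loop above repeatedly fetches each system-critical pod and tests its Ready condition, with a 4m0s budget per pod. A condensed client-go sketch of that check follows; the kubeconfig path is a placeholder and the loop is deliberately simpler than minikube's own pod_ready.go.

    // pod_ready_sketch.go — simplified wait for a pod's Ready condition, in the
    // spirit of minikube's pod_ready.go. Kubeconfig path is a placeholder.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.Background(), "coredns-7c65d6cfc9-9tmvq", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
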
	I0916 10:50:08.919069   44102 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:50:08.928241   44102 ops.go:34] apiserver oom_adj: -16
	I0916 10:50:08.928249   44102 kubeadm.go:597] duration metric: took 20.714700355s to restartPrimaryControlPlane
	I0916 10:50:08.928254   44102 kubeadm.go:394] duration metric: took 20.73914576s to StartCluster
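
The oom_adj probe a few lines up shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj; the -16 it reports biases the kernel OOM killer away from the apiserver. Reading the same file directly from Go is nearly a one-liner, sketched here with a hardcoded PID as a placeholder for the pgrep lookup.

    // oom_adj_sketch.go — direct read of the apiserver's OOM score adjustment,
    // equivalent to the `cat /proc/$(pgrep kube-apiserver)/oom_adj` in the log.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        pid := 46869 // placeholder; real code resolves this via pgrep or /proc
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            panic(err)
        }
        // -16 (as logged above) makes the apiserver an unattractive OOM target.
        fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
    }
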
	I0916 10:50:08.928267   44102 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:08.928326   44102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:08.928829   44102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:08.929108   44102 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:50:08.929178   44102 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:50:08.929190   44102 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	W0916 10:50:08.929195   44102 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:50:08.929198   44102 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:50:08.929214   44102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:50:08.929217   44102 host.go:66] Checking if "minikube" exists ...
	I0916 10:50:08.929232   44102 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:08.929617   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:50:08.929625   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:50:08.929651   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:08.929686   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:50:08.929694   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:50:08.929746   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:08.931620   44102 out.go:177] * Configuring local host environment ...
	W0916 10:50:08.933142   44102 out.go:270] * 
	W0916 10:50:08.933162   44102 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:50:08.933167   44102 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:50:08.933170   44102 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:50:08.933174   44102 out.go:270] * 
	W0916 10:50:08.933209   44102 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:50:08.933216   44102 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:50:08.933219   44102 out.go:270] * 
	W0916 10:50:08.933240   44102 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:50:08.933262   44102 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:50:08.933270   44102 out.go:270] * 
	W0916 10:50:08.933275   44102 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:50:08.933310   44102 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:50:08.934686   44102 out.go:177] * Verifying Kubernetes components...
	I0916 10:50:08.936373   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:50:08.946768   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
	I0916 10:50:08.948421   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
	I0916 10:50:08.957203   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
	I0916 10:50:08.957255   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
	I0916 10:50:08.958773   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
	I0916 10:50:08.958808   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
	I0916 10:50:08.967420   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:50:08.967441   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:08.967696   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:50:08.967711   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:08.971852   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:50:08.972292   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
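
The freezer sequence above (api_server.go:166-204) verifies the apiserver container is not paused: minikube greps the process's /proc/<pid>/cgroup for its freezer hierarchy, then reads that hierarchy's freezer.state and expects THAWED before hitting /healthz. A cgroup-v1 sketch of those two steps, with the PID as a placeholder:

    // freezer_sketch.go — cgroup v1 check that a process is not frozen,
    // mirroring the egrep + freezer.state reads in the log.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func freezerState(pid int) (string, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(data), "\n") {
            // cgroup v1 lines look like "2:freezer:/kubepods/burstable/<pod>/<ctr>"
            parts := strings.SplitN(line, ":", 3)
            if len(parts) == 3 && parts[1] == "freezer" {
                state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                if err != nil {
                    return "", err
                }
                return strings.TrimSpace(string(state)), nil
            }
        }
        return "", fmt.Errorf("no freezer cgroup found for pid %d", pid)
    }

    func main() {
        state, err := freezerState(46869) // placeholder PID from the log
        if err != nil {
            panic(err)
        }
        fmt.Println("freezer state:", state) // expect "THAWED" for a live apiserver
    }
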
	I0916 10:50:08.972953   44102 addons.go:234] Setting addon default-storageclass=true in "minikube"
	W0916 10:50:08.972961   44102 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:50:08.972979   44102 host.go:66] Checking if "minikube" exists ...
	I0916 10:50:08.973448   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:50:08.973455   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:50:08.973479   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:08.973984   44102 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:50:08.975365   44102 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.975381   44102 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:50:08.975386   44102 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.975417   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.983324   44102 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:50:08.983471   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1649761568 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.991981   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
	I0916 10:50:08.994149   44102 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:09.003412   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
	I0916 10:50:09.003491   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
	I0916 10:50:09.014326   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:50:09.014349   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:09.018714   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:50:09.018754   44102 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.018771   44102 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:50:09.018778   44102 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.018822   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.038400   44102 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:50:09.038571   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2232686379 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.051205   44102 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
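
Addon installation above reduces to staging a manifest under /etc/kubernetes/addons and applying it with the versioned kubectl against the control plane's kubeconfig. A minimal os/exec sketch of that apply step follows; the paths mirror the logged command but should be treated as placeholders rather than a stable interface.

    // addon_apply_sketch.go — applying an addon manifest the way the logged
    // command does. Paths are copied from the log but treated as placeholders.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value assignments, as the logged command uses.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("apply failed:", err)
        }
    }
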
	I0916 10:50:09.256034   44102 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:50:09.269235   44102 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:50:09.271890   44102 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:50:09.271899   44102 node_ready.go:38] duration metric: took 2.646283ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:50:09.271905   44102 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:09.276337   44102 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.280639   44102 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.280647   44102 pod_ready.go:82] duration metric: took 4.300934ms for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.280654   44102 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.308578   44102 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.308591   44102 pod_ready.go:82] duration metric: took 27.93217ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.308599   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.492600   44102 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:50:09.494021   44102 addons.go:510] duration metric: took 564.915064ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 10:50:09.708409   44102 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.708421   44102 pod_ready.go:82] duration metric: took 399.817476ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.708431   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.108446   44102 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.108456   44102 pod_ready.go:82] duration metric: took 400.019969ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.108466   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.508234   44102 pod_ready.go:93] pod "kube-proxy-lt5f5" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.508255   44102 pod_ready.go:82] duration metric: took 399.773468ms for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.508264   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.908192   44102 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.908203   44102 pod_ready.go:82] duration metric: took 399.935295ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.908212   44102 pod_ready.go:39] duration metric: took 1.636299031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:10.908227   44102 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:50:10.908289   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:10.922119   44102 api_server.go:72] duration metric: took 1.988780115s to wait for apiserver process to appear ...
	I0916 10:50:10.922134   44102 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:50:10.922153   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:10.925548   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:50:10.926399   44102 api_server.go:141] control plane version: v1.31.1
	I0916 10:50:10.926408   44102 api_server.go:131] duration metric: took 4.269595ms to wait for apiserver health ...
	I0916 10:50:10.926414   44102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:50:11.110513   44102 system_pods.go:59] 7 kube-system pods found
	I0916 10:50:11.110526   44102 system_pods.go:61] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:50:11.110529   44102 system_pods.go:61] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:50:11.110532   44102 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [4a0a9d93-9f46-4cd7-a3fd-1f7370245887] Running
	I0916 10:50:11.110536   44102 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:50:11.110538   44102 system_pods.go:61] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:50:11.110541   44102 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:50:11.110543   44102 system_pods.go:61] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running
	I0916 10:50:11.110548   44102 system_pods.go:74] duration metric: took 184.129488ms to wait for pod list to return data ...
	I0916 10:50:11.110554   44102 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:50:11.308206   44102 default_sa.go:45] found service account: "default"
	I0916 10:50:11.308219   44102 default_sa.go:55] duration metric: took 197.660035ms for default service account to be created ...
	I0916 10:50:11.308225   44102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:50:11.510357   44102 system_pods.go:86] 7 kube-system pods found
	I0916 10:50:11.510371   44102 system_pods.go:89] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:50:11.510376   44102 system_pods.go:89] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:50:11.510379   44102 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [4a0a9d93-9f46-4cd7-a3fd-1f7370245887] Running
	I0916 10:50:11.510382   44102 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:50:11.510385   44102 system_pods.go:89] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:50:11.510387   44102 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:50:11.510389   44102 system_pods.go:89] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running
	I0916 10:50:11.510395   44102 system_pods.go:126] duration metric: took 202.165936ms to wait for k8s-apps to be running ...
	I0916 10:50:11.510400   44102 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:50:11.510443   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:50:11.522234   44102 system_svc.go:56] duration metric: took 11.824388ms WaitForService to wait for kubelet
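
The WaitForService step above leans entirely on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so the error value is the whole signal. A sketch of the same gate in Go, using the plain unit name rather than the exact argument form in the log:

    // svc_active_sketch.go — checking that kubelet is running, as the
    // WaitForService step above does; a nil error means the unit is active.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
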
	I0916 10:50:11.522250   44102 kubeadm.go:582] duration metric: took 2.588917885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:50:11.522265   44102 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:50:11.708617   44102 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:11.708628   44102 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:11.708635   44102 node_conditions.go:105] duration metric: took 186.36639ms to run NodePressure ...
	I0916 10:50:11.708644   44102 start.go:241] waiting for startup goroutines ...
	I0916 10:50:11.708649   44102 start.go:246] waiting for cluster config update ...
	I0916 10:50:11.708658   44102 start.go:255] writing updated cluster config ...
	I0916 10:50:11.708906   44102 exec_runner.go:51] Run: rm -f paused
	I0916 10:50:11.712704   44102 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:50:11.713754   44102 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
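
That final E line deserves decoding: `fork/exec /usr/local/bin/kubectl: exec format error` means the kernel refused to execute the binary, typically because it was built for a different OS/architecture or is truncated; it is logged after `Done!` and does not fail the start. A tiny sketch that narrows this down by checking the ELF magic bytes, with the path taken from the log line:

    // elf_check_sketch.go — quick triage for "exec format error": a Linux
    // executable must start with the 4-byte ELF magic. A non-ELF file (or a
    // truncated download) fails this check; a wrong-architecture ELF passes it
    // and would need a deeper look with debug/elf.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("/usr/local/bin/kubectl") // path from the log line above
        if err != nil {
            panic(err)
        }
        defer f.Close()
        magic := make([]byte, 4)
        if _, err := f.Read(magic); err != nil {
            panic(err)
        }
        if bytes.Equal(magic, []byte{0x7f, 'E', 'L', 'F'}) {
            fmt.Println("ELF binary; if exec still fails, check its architecture")
        } else {
            fmt.Printf("not an ELF binary (magic: % x) — consistent with exec format error\n", magic)
        }
    }
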
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:50:12 UTC. --
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.322407990Z" level=info msg="ignoring event" container=4045e763ce4dddc298c49202c69abaaf53578349a14c6b66862ff52fa08e55c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.324228760Z" level=info msg="ignoring event" container=8dde68f011d3917709bd7d17c674d8d646454a449ab082472d867ccc43e33703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.327206842Z" level=info msg="ignoring event" container=cddf26022ee7468f6f5285ac9605b017ab7d59d05196a64ee72b6fb2c37a931d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.327260261Z" level=info msg="ignoring event" container=4c8dc9f7334c2a7afc6de182bab4178101d4c2627439740504f7e17f85dde35c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.328419724Z" level=info msg="ignoring event" container=4cc6aa8bc7d5e9b6c23b0ffef1d7dd33c125694c09d123e93105211110fc35d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.342215724Z" level=info msg="ignoring event" container=7b5dd454fcc4f4ca4ab258f0f3f3f6b009d55ed512e77ba61d248f8d98c06cb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.355706853Z" level=info msg="ignoring event" container=733fde545b9700e451efe7302c3fab774b29f95a4e2a4c266185a1f6906b6305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.362421026Z" level=info msg="ignoring event" container=d36cca85a0cf0e08b86d5f561cee6dadd426b71f565584ca300ff922a44b6af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.419908685Z" level=info msg="ignoring event" container=3c1686a3f081659b27d32842de1f945b93fd57c4bda45349659678d8dbd8152d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"33693827aa1af634593b8fe1bf32ef602c24c24b9b2b084a7cf0811d3e52d0a4\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"75baf2b9ae9f6924e7f354be0debcdc1254644d58d79381d5ce09b167a3ac872\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87e5de0471ea69fb8e34c546e4892215dd0cf17c295ac4ade0e5f68165e028e4/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857b5574b5ed24fd458b7d9caeb741273b94cafa380f363c834dc741c67be6bc/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2740906d206d0180f54e8558d2448e37481489a23df6bfd12097d07aa61a198/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b5e4a7c1dc72c399487814945c2fe454277fa0ed099902c0983e1d7bf97645f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:51 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:53 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f3456b9ca9b8f7ddd786697c6f8a2fd71715f0ee116f88138b76e67c24ceb3c/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e79acef8fbbd7a1f8cc65da627523ab9ab48441a2fe2f69d88f9fc35aba2cb2/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f04dd1758d06d211cc71418383ba2aa440d9092c700cd0c206655578bf0b049f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	76cbcdfc11b3b       c69fa2e9cbf5f       18 seconds ago      Running             coredns                   2                   f04dd1758d06d       coredns-7c65d6cfc9-9tmvq
	088c924c78362       6e38f40d628db       18 seconds ago      Running             storage-provisioner       4                   3e79acef8fbbd       storage-provisioner
	25e33a97327c4       60c005f310ff3       18 seconds ago      Running             kube-proxy                3                   4f3456b9ca9b8       kube-proxy-lt5f5
	9db9497d6e3b9       9aa1fad941575       22 seconds ago      Running             kube-scheduler            3                   5b5e4a7c1dc72       kube-scheduler-ubuntu-20-agent-2
	88111361538ed       2e96e5913fc06       22 seconds ago      Running             etcd                      3                   d2740906d206d       etcd-ubuntu-20-agent-2
	7bedc882faf66       175ffd71cce3d       22 seconds ago      Running             kube-controller-manager   3                   857b5574b5ed2       kube-controller-manager-ubuntu-20-agent-2
	46d889fefcb7a       6bab7719df100       22 seconds ago      Running             kube-apiserver            0                   87e5de0471ea6       kube-apiserver-ubuntu-20-agent-2
	4c8dc9f7334c2       175ffd71cce3d       25 seconds ago      Exited              kube-controller-manager   2                   4045e763ce4dd       kube-controller-manager-ubuntu-20-agent-2
	3c1686a3f0816       9aa1fad941575       25 seconds ago      Exited              kube-scheduler            2                   733fde545b970       kube-scheduler-ubuntu-20-agent-2
	d36cca85a0cf0       60c005f310ff3       25 seconds ago      Exited              kube-proxy                2                   4cc6aa8bc7d5e       kube-proxy-lt5f5
	89edf012e73d5       2e96e5913fc06       25 seconds ago      Exited              etcd                      2                   7b5dd454fcc4f       etcd-ubuntu-20-agent-2
	b80696d65d3f0       6e38f40d628db       26 seconds ago      Created             storage-provisioner       3                   b51e183b7b46c       storage-provisioner
	a45299c063bb1       c69fa2e9cbf5f       53 seconds ago      Exited              coredns                   1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	67e355cfcbda0       6bab7719df100       57 seconds ago      Exited              kube-apiserver            1                   28927fc2d6545       kube-apiserver-ubuntu-20-agent-2
	
	
	==> coredns [76cbcdfc11b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58241 - 18724 "HINFO IN 6119160872083283358.4362415468974086659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018519672s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:50:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     77s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         83s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientPID     82s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 82s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           78s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           51s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 22s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    22s (x7 over 22s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	
	
	==> etcd [88111361538e] <==
	{"level":"info","ts":"2024-09-16T10:49:50.871606Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:50.871736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:50.874219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:50.874741Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874798Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874869Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:50.874900Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:52.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.662104Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:52.662126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.662344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.663195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:52.664047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [89edf012e73d] <==
	{"level":"info","ts":"2024-09-16T10:49:47.744523Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-16T10:49:47.753231Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","commit-index":515}
	{"level":"info","ts":"2024-09-16T10:49:47.754041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-16T10:49:47.754098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became follower at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:47.754122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b435b960bec7c3c [peers: [], term: 3, commit: 515, applied: 0, lastindex: 515, lastterm: 3]"}
	{"level":"warn","ts":"2024-09-16T10:49:47.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-16T10:49:47.759048Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":489}
	{"level":"info","ts":"2024-09-16T10:49:47.762168Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-16T10:49:47.763923Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b435b960bec7c3c","timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:47.764228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:47.764268Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b435b960bec7c3c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T10:49:47.764903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:47.766996Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T10:49:47.767044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:49:47.767440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:47.767550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:47.768180Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:47.768234Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:47.768334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:47.768351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	
	
	==> kernel <==
	 10:50:12 up 32 min,  0 users,  load average: 0.92, 0.51, 0.30
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [46d889fefcb7] <==
	I0916 10:49:53.567855       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:49:53.567900       1 policy_source.go:224] refreshing policies
	I0916 10:49:53.575115       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:49:53.575132       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:49:53.575283       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:53.575301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:53.575465       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:53.580633       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:49:53.580673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:53.596395       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:53.596433       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:53.596442       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:53.596449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:53.596455       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:53.599321       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:54.478124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:55.207989       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:49:55.217830       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:49:55.248987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:49:55.269731       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:49:55.276367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:49:57.099450       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:49:57.249320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
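The lone error in this block ("no API server IP addresses were listed in storage") is a startup race in the endpoint reconciler: it tried to prune stale endpoints before the restarted apiserver had published its own. The quota evaluators for endpoints and endpointslices at the end of the block suggest the objects were written shortly after. With a working kubectl (the host copy is broken in this run, as the test failures below show), the result could be checked with:

	kubectl get endpoints kubernetes -n default -o wide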
	
	
	==> kube-apiserver [67e355cfcbda] <==
	W0916 10:49:45.070608       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.109161       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.120779       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.120899       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.134173       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.149767       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.185767       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.187044       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.304341       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.320994       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.344654       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.353348       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.380165       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.387448       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.409947       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.461534       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.512147       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.532416       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.603473       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.683743       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.694566       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.695882       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.698138       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.773255       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.792702       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
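All of these warnings come from the outgoing apiserver instance retrying its etcd storage channels while nothing was listening on 127.0.0.1:2379 during the restart; the healthy instance above shows the cluster recovered once etcd came back. A hedged manual probe of the same port, using the serving certificates from the etcd block:

	# refused while etcd is down; returns {"health":"true",...} once it is up
	curl --cacert /var/lib/minikube/certs/etcd/ca.crt \
	     --cert /var/lib/minikube/certs/etcd/server.crt \
	     --key /var/lib/minikube/certs/etcd/server.key \
	     https://127.0.0.1:2379/health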
	
	
	==> kube-controller-manager [4c8dc9f7334c] <==
	I0916 10:49:48.173517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [7bedc882faf6] <==
	I0916 10:49:56.896159       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:49:56.896270       1 shared_informer.go:320] Caches are synced for job
	I0916 10:49:56.896299       1 shared_informer.go:320] Caches are synced for taint
	I0916 10:49:56.896392       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 10:49:56.896463       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ubuntu-20-agent-2"
	I0916 10:49:56.896504       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 10:49:56.898659       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:49:56.900608       1 shared_informer.go:320] Caches are synced for deployment
	I0916 10:49:56.905888       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:49:56.938304       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 10:49:56.940607       1 shared_informer.go:320] Caches are synced for expand
	I0916 10:49:56.946179       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 10:49:56.953441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.884995ms"
	I0916 10:49:56.953792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="164.068µs"
	I0916 10:49:56.997216       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:49:57.003785       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:49:57.017220       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 10:49:57.046362       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:49:57.053832       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:57.101503       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:57.463535       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:57.495994       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:57.496027       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:49:57.813840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.70585ms"
	I0916 10:49:57.813980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.84µs"
	
	
	==> kube-proxy [25e33a97327c] <==
	I0916 10:49:54.681567       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:54.797102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:54.797163       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:54.816103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:54.816152       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:54.817801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:54.818176       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:54.818215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:54.819244       1 config.go:199] "Starting service config controller"
	I0916 10:49:54.819298       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:54.819317       1 config.go:328] "Starting node config controller"
	I0916 10:49:54.819328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:54.819356       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:54.819397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:54.919504       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:54.919540       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:54.919510       1 shared_informer.go:320] Caches are synced for endpoint slice config
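The one error in this block is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort traffic on every local IP. If the suggested restriction is wanted, the field lives in the kube-proxy ConfigMap (assuming the standard kubeadm layout; not a change this run made):

	# inspect the current value; setting it to "primary" limits NodePorts
	# to the node's primary IP family addresses
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses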
	
	
	==> kube-proxy [d36cca85a0cf] <==
	I0916 10:49:47.834945       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:49:47.965482       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	
	
	==> kube-scheduler [3c1686a3f081] <==
	I0916 10:49:48.153578       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:48.393574       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://10.138.0.48:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.138.0.48:8441: connect: connection refused
	W0916 10:49:48.393620       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:48.393632       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:48.399434       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:48.399458       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 10:49:48.399475       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 10:49:48.401582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:48.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:49:48.401653       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 10:49:48.401826       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:48.401867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:48.401888       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:49:48.401944       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0916 10:49:48.401999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9db9497d6e3b] <==
	I0916 10:49:51.325271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:53.502430       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:53.502467       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0916 10:49:53.502481       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:53.502490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:53.525152       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:53.525177       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:53.527126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:53.527171       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:53.527325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:53.527440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:53.627582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
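The forbidden-configmap warnings here are a bootstrap ordering artifact: this scheduler instance came up before the default RBAC clusterroles (system:kube-scheduler and friends) were reconciled, so the authentication-config lookup failed and it continued anonymously; the cache sync on the last line shows it recovered once RBAC existed. Only if the warning persisted would the rolebinding suggested in the log be needed, roughly as follows (a hedged adaptation for the scheduler user; "scheduler-auth-reader" is an arbitrary name for this sketch):

	kubectl create rolebinding -n kube-system scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler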
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:50:12 UTC. --
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.245766   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-usr-local-share-ca-certificates\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.245792   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6de72559ec804c46642b9388a6a99321-kubeconfig\") pod \"kube-scheduler-ubuntu-20-agent-2\" (UID: \"6de72559ec804c46642b9388a6a99321\") " pod="kube-system/kube-scheduler-ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.245818   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/5b137b06bdfaed6743b655439322dfe0-etcd-data\") pod \"etcd-ubuntu-20-agent-2\" (UID: \"5b137b06bdfaed6743b655439322dfe0\") " pod="kube-system/etcd-ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.245853   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4642e2c137134acfd9b1b4b4e9aa2fbd-k8s-certs\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"4642e2c137134acfd9b1b4b4e9aa2fbd\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.245887   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4642e2c137134acfd9b1b4b4e9aa2fbd-usr-local-share-ca-certificates\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"4642e2c137134acfd9b1b4b4e9aa2fbd\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.245915   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-ca-certs\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.430413   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: E0916 10:49:50.430915   46464 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 10.138.0.48:8441: connect: connection refused" node="ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: E0916 10:49:50.645746   46464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ubuntu-20-agent-2?timeout=10s\": dial tcp 10.138.0.48:8441: connect: connection refused" interval="800ms"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.832845   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600201   46464 kubelet_node_status.go:111] "Node was previously registered" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600319   46464 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600358   46464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.601084   46464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.038292   46464 apiserver.go:52] "Watching apiserver"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.041192   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.043622   46464 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.051286   46464 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.065037   46464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ababb2af12b481e591ddfe93ae3b1f" path="/var/lib/kubelet/pods/a5ababb2af12b481e591ddfe93ae3b1f/volumes"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.093533   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=0.093511983 podStartE2EDuration="93.511983ms" podCreationTimestamp="2024-09-16 10:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:49:54.0850596 +0000 UTC m=+4.114093707" watchObservedRunningTime="2024-09-16 10:49:54.093511983 +0000 UTC m=+4.122546090"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100225   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-xtables-lock\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100303   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-lib-modules\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100365   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfe4a726-3764-4daf-a322-8f33ae3528f7-tmp\") pod \"storage-provisioner\" (UID: \"dfe4a726-3764-4daf-a322-8f33ae3528f7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.211205   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:57 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:57.796237   46464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [088c924c7836] <==
	I0916 10:49:54.673228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:49:54.686267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:49:54.686349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:50:12.083437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:50:12.083563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"741f2d64-542e-41ba-a831-0f0a3ad64a15", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77 became leader
	I0916 10:50:12.083591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	I0916 10:50:12.184444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
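Note the roughly 18 s gap between attempting the lease at 10:49:54 and acquiring it at 10:50:12: the previous provisioner instance presumably held the k8s.io-minikube-hostpath lease until it expired. The lease is recorded as annotations on the Endpoints object named in the event and can be inspected (given a working kubectl) with:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml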
	
	
	==> storage-provisioner [b80696d65d3f] <==
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (429.962µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/ComponentHealth (1.11s)
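The real failure here is the environment, not the component health check: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel could not execute the kubectl binary at all, typically because it was built for a different architecture or the download was corrupted (for example, an error page saved as the file). Every kubectl-driven assertion in the remaining tests fails with the same message, so this is one broken binary, not many regressions. A hedged way to confirm on the host:

	file /usr/local/bin/kubectl   # expect "ELF 64-bit LSB executable, x86-64" on this amd64 host
	uname -m                      # x86_64
	head -c 16 /usr/local/bin/kubectl | od -c   # non-ELF bytes here indicate a bad download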

TestFunctional/serial/InvalidService (0s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context minikube apply -f testdata/invalidsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (429.055µs)
functional_test.go:2323: kubectl --context minikube apply -f testdata/invalidsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/InvalidService (0.00s)

TestFunctional/parallel/DashboardCmd (2.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] stderr:
I0916 10:50:14.754637   48808 out.go:345] Setting OutFile to fd 1 ...
I0916 10:50:14.754778   48808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:50:14.754787   48808 out.go:358] Setting ErrFile to fd 2...
I0916 10:50:14.754792   48808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:50:14.754965   48808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
I0916 10:50:14.755206   48808 mustload.go:65] Loading cluster: minikube
I0916 10:50:14.755546   48808 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:50:14.755801   48808 exec_runner.go:51] Run: systemctl --version
I0916 10:50:14.758561   48808 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
I0916 10:50:14.758590   48808 api_server.go:166] Checking apiserver status ...
I0916 10:50:14.758615   48808 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0916 10:50:14.772329   48808 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
I0916 10:50:14.781778   48808 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
I0916 10:50:14.781842   48808 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
I0916 10:50:14.790557   48808 api_server.go:204] freezer state: "THAWED"
I0916 10:50:14.790612   48808 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
I0916 10:50:14.794944   48808 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
ok
I0916 10:50:14.794964   48808 host.go:66] Checking if "minikube" exists ...
I0916 10:50:14.795164   48808 api_server.go:166] Checking apiserver status ...
I0916 10:50:14.795202   48808 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0916 10:50:14.809052   48808 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
I0916 10:50:14.817421   48808 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
I0916 10:50:14.817496   48808 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
I0916 10:50:14.826263   48808 api_server.go:204] freezer state: "THAWED"
I0916 10:50:14.826288   48808 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
I0916 10:50:14.829864   48808 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
ok
W0916 10:50:14.829908   48808 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0916 10:50:14.830065   48808 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:50:14.830084   48808 addons.go:69] Setting dashboard=true in profile "minikube"
I0916 10:50:14.830094   48808 addons.go:234] Setting addon dashboard=true in "minikube"
I0916 10:50:14.830119   48808 host.go:66] Checking if "minikube" exists ...
I0916 10:50:14.830594   48808 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
I0916 10:50:14.830610   48808 api_server.go:166] Checking apiserver status ...
I0916 10:50:14.830635   48808 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0916 10:50:14.843512   48808 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
I0916 10:50:14.851654   48808 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
I0916 10:50:14.851703   48808 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
I0916 10:50:14.859404   48808 api_server.go:204] freezer state: "THAWED"
I0916 10:50:14.859427   48808 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
I0916 10:50:14.863785   48808 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
ok
I0916 10:50:14.865933   48808 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0916 10:50:14.867302   48808 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0916 10:50:14.868574   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0916 10:50:14.868602   48808 exec_runner.go:151] cp: dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0916 10:50:14.868735   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1389707914 /etc/kubernetes/addons/dashboard-ns.yaml
I0916 10:50:14.876866   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0916 10:50:14.876890   48808 exec_runner.go:151] cp: dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0916 10:50:14.876996   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4010476864 /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0916 10:50:14.886612   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0916 10:50:14.886641   48808 exec_runner.go:151] cp: dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0916 10:50:14.886753   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1073782856 /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0916 10:50:14.894765   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0916 10:50:14.894794   48808 exec_runner.go:151] cp: dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0916 10:50:14.894915   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4066229876 /etc/kubernetes/addons/dashboard-configmap.yaml
I0916 10:50:14.902885   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0916 10:50:14.902915   48808 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0916 10:50:14.903041   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3610324906 /etc/kubernetes/addons/dashboard-dp.yaml
I0916 10:50:14.911659   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0916 10:50:14.911686   48808 exec_runner.go:151] cp: dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0916 10:50:14.911799   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2683194124 /etc/kubernetes/addons/dashboard-role.yaml
I0916 10:50:14.919838   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0916 10:50:14.919884   48808 exec_runner.go:151] cp: dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0916 10:50:14.920020   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2737267450 /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0916 10:50:14.928582   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0916 10:50:14.928615   48808 exec_runner.go:151] cp: dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0916 10:50:14.928736   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1498715194 /etc/kubernetes/addons/dashboard-sa.yaml
I0916 10:50:14.936626   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0916 10:50:14.936659   48808 exec_runner.go:151] cp: dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0916 10:50:14.936774   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube497842764 /etc/kubernetes/addons/dashboard-secret.yaml
I0916 10:50:14.944586   48808 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:50:14.944628   48808 exec_runner.go:151] cp: dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0916 10:50:14.944750   48808 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2186060378 /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:50:14.952939   48808 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:50:15.481033   48808 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube addons enable metrics-server

I0916 10:50:15.482285   48808 addons.go:197] Writing out "minikube" config to set dashboard=true...
W0916 10:50:15.482577   48808 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0916 10:50:15.483151   48808 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0916 10:50:15.490291   48808 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  9f5fe9bf-3e98-4c59-9c8f-290626f78c4b 640 0 2024-09-16 10:50:15 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-09-16 10:50:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.78.248,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.78.248],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0916 10:50:15.490402   48808 out.go:270] * Launching proxy ...
* Launching proxy ...
I0916 10:50:15.490464   48808 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context minikube proxy --port 36195]
I0916 10:50:15.492582   48808 out.go:201] 
W0916 10:50:15.493859   48808 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
W0916 10:50:15.493871   48808 out.go:270] * 
* 
W0916 10:50:15.495544   48808 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 10:50:15.496746   48808 out.go:201] 
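Two details in the trace above are worth noting. First, on the none driver minikube verifies apiserver liveness by resolving the process's freezer cgroup and reading freezer.state (THAWED means running), and that check succeeded three times. Second, the dashboard command ultimately shells out to the host's kubectl for the proxy (dashboard.go:152 above), so it hits the same exec format error as the other tests. The health check can be replayed by hand (PID 46869 is specific to this run):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'         # find the apiserver PID
	sudo egrep '^[0-9]+:freezer:' /proc/46869/cgroup     # freezer cgroup path for that PID
	# then read freezer.state under /sys/fs/cgroup/freezer/<path>; expect THAWED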
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| addons    | minikube addons disable                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:38 UTC |
	|           | helm-tiller --alsologtostderr                                            |          |         |         |                     |                     |
	|           | -v=1                                                                     |          |         |         |                     |                     |
	| addons    | enable headlamp -p minikube                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|           | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| addons    | minikube addons disable                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|           | headlamp --alsologtostderr                                               |          |         |         |                     |                     |
	|           | -v=1                                                                     |          |         |         |                     |                     |
	| addons    | disable cloud-spanner -p                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|           | minikube                                                                 |          |         |         |                     |                     |
	| addons    | disable nvidia-device-plugin                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|           | -p minikube                                                              |          |         |         |                     |                     |
	| addons    | minikube addons disable yakd                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	|           | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| stop      | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons    | enable dashboard -p minikube                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons    | disable dashboard -p minikube                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons    | disable gvisor -p minikube                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| start     | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|           | --cert-expiration=3m                                                     |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	|           | --cert-expiration=8760h                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start     | -p minikube --memory=4000                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|           | --apiserver-port=8441                                                    |          |         |         |                     |                     |
	|           | --wait=all --driver=none                                                 |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --alsologtostderr                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | -v=8                                                                     |          |         |         |                     |                     |
	| kubectl   | minikube kubectl -- --context                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | minikube get pods                                                        |          |         |         |                     |                     |
	| start     | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |          |         |         |                     |                     |
	|           | --wait=all                                                               |          |         |         |                     |                     |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| config    | minikube config set cpus 2                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| dashboard | --url --port 36195 -p minikube                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
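	The last audit row is the command under test; rerunning it outside the harness reproduces the failure (a sketch, using the built binary and profile from this run):

    out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1

	Rows with an empty End Time (the two failed `config get cpus` calls and the dashboard command) appear to be the commands that did not finish successfully, which is consistent with the proxy launch aborting.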
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:49:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:49:34.367697   44102 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:49:34.367796   44102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:34.367800   44102 out.go:358] Setting ErrFile to fd 2...
	I0916 10:49:34.367803   44102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:34.368000   44102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:49:34.368576   44102 out.go:352] Setting JSON to false
	I0916 10:49:34.369762   44102 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1925,"bootTime":1726481849,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:49:34.369864   44102 start.go:139] virtualization: kvm guest
	I0916 10:49:34.372207   44102 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:49:34.373528   44102 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:49:34.373549   44102 notify.go:220] Checking for updates...
	I0916 10:49:34.373596   44102 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:49:34.375134   44102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:49:34.376456   44102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:49:34.377827   44102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:49:34.379067   44102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:49:34.380215   44102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:49:34.381830   44102 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:49:34.381903   44102 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:49:34.382258   44102 exec_runner.go:51] Run: systemctl --version
	I0916 10:49:34.394119   44102 out.go:177] * Using the none driver based on existing profile
	I0916 10:49:34.395240   44102 start.go:297] selected driver: none
	I0916 10:49:34.395245   44102 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:34.395334   44102 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:49:34.395356   44102 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 10:49:34.396402   44102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:49:34.396426   44102 cni.go:84] Creating CNI manager for ""
	I0916 10:49:34.396480   44102 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:34.396521   44102 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:34.398149   44102 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:49:34.399432   44102 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:49:34.399661   44102 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:49:34.399737   44102 start.go:364] duration metric: took 42.937µs to acquireMachinesLock for "minikube"
	I0916 10:49:34.399751   44102 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:49:34.399756   44102 fix.go:54] fixHost starting: 
	I0916 10:49:34.400566   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:34.400578   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:49:34.400609   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:34.417610   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/42804/cgroup
	I0916 10:49:34.427297   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74"
	I0916 10:49:34.427349   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda5ababb2af12b481e591ddfe93ae3b1f/67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74/freezer.state
	I0916 10:49:34.435135   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:49:34.435172   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:34.438855   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:49:34.438872   44102 fix.go:112] recreateIfNeeded on minikube: state=Running err=<nil>
	W0916 10:49:34.438883   44102 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:49:34.440833   44102 out.go:177] * Updating the running none "minikube" bare metal machine ...
	I0916 10:49:34.442224   44102 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:49:34.442294   44102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:34.442340   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:34.450172   44102 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:49:34.450193   44102 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:49:34.450204   44102 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:49:34.451555   44102 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:49:34.452660   44102 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:49:34.452717   44102 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:49:34.452824   44102 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> 110572.pem in /etc/ssl/certs
	I0916 10:49:34.452922   44102 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts -> hosts in /etc/test/nested/copy/11057
	I0916 10:49:34.452966   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11057
	I0916 10:49:34.460722   44102 exec_runner.go:144] found /etc/ssl/certs/110572.pem, removing ...
	I0916 10:49:34.460735   44102 exec_runner.go:203] rm: /etc/ssl/certs/110572.pem
	I0916 10:49:34.460775   44102 exec_runner.go:51] Run: sudo rm -f /etc/ssl/certs/110572.pem
	I0916 10:49:34.468401   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:34.468524   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1744777790 /etc/ssl/certs/110572.pem
	I0916 10:49:34.476682   44102 exec_runner.go:144] found /etc/test/nested/copy/11057/hosts, removing ...
	I0916 10:49:34.476689   44102 exec_runner.go:203] rm: /etc/test/nested/copy/11057/hosts
	I0916 10:49:34.476722   44102 exec_runner.go:51] Run: sudo rm -f /etc/test/nested/copy/11057/hosts
	I0916 10:49:34.484139   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts --> /etc/test/nested/copy/11057/hosts (40 bytes)
	I0916 10:49:34.484250   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1533693540 /etc/test/nested/copy/11057/hosts
	I0916 10:49:34.491824   44102 start.go:296] duration metric: took 49.589157ms for postStartSetup
	I0916 10:49:34.491834   44102 fix.go:56] duration metric: took 92.078988ms for fixHost
	I0916 10:49:34.491838   44102 start.go:83] releasing machines lock for "minikube", held for 92.094707ms
	I0916 10:49:34.492251   44102 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:49:34.492337   44102 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:49:34.494437   44102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:49:34.494490   44102 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:34.502745   44102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:34.502761   44102 start.go:495] detecting cgroup driver to use...
	I0916 10:49:34.502779   44102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:34.502870   44102 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:34.518995   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:49:34.527877   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:49:34.537931   44102 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:49:34.537970   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:49:34.546628   44102 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:34.556292   44102 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:49:34.564848   44102 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:34.573674   44102 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:34.581850   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:49:34.590699   44102 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:49:34.599809   44102 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:49:34.608239   44102 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:34.615665   44102 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:34.622351   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:34.853534   44102 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:49:35.007953   44102 start.go:495] detecting cgroup driver to use...
	I0916 10:49:35.007987   44102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:35.008165   44102 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:35.030145   44102 exec_runner.go:51] Run: which cri-dockerd
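	The crictl.yaml rewrite just above switches crictl's endpoint: earlier (for containerd) it pointed runtime-endpoint at /run/containerd/containerd.sock, and now that the docker runtime is selected it points at /var/run/cri-dockerd.sock. Whether crictl can actually reach the new endpoint can be checked directly once cri-docker.socket is up (a sketch, using the socket path from the log):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version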
	I0916 10:49:35.031044   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:49:35.038626   44102 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:49:35.038639   44102 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:35.038674   44102 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:35.045787   44102 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:49:35.045915   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube470809814 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:49:35.053329   44102 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:49:35.282790   44102 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:49:35.519083   44102 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:49:35.519230   44102 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:49:35.519237   44102 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:49:35.519277   44102 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:49:35.527684   44102 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:49:35.527806   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2137543206 /etc/docker/daemon.json
	I0916 10:49:35.535734   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:35.767198   44102 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:49:46.284206   44102 exec_runner.go:84] Completed: sudo systemctl restart docker: (10.516973976s)
	I0916 10:49:46.284268   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:49:46.300861   44102 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:49:46.328981   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:46.342597   44102 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:49:46.549632   44102 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:49:46.763075   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:46.984012   44102 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:49:46.998648   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:49:47.011609   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:47.228695   44102 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:49:47.296032   44102 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:49:47.296087   44102 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:49:47.297517   44102 start.go:563] Will wait 60s for crictl version
	I0916 10:49:47.297559   44102 exec_runner.go:51] Run: which crictl
	I0916 10:49:47.298467   44102 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:49:47.327405   44102 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:49:47.327452   44102 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:47.352845   44102 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:49:47.386887   44102 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:49:47.387006   44102 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:47.390512   44102 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:49:47.392256   44102 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:49:47.394342   44102 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:47.394545   44102 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:49:47.394561   44102 kubeadm.go:934] updating node { 10.138.0.48 8441 v1.31.1 docker true true} ...
	I0916 10:49:47.394671   44102 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:49:47.394731   44102 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:49:47.491260   44102 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:49:47.491348   44102 cni.go:84] Creating CNI manager for ""
	I0916 10:49:47.491370   44102 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:47.491382   44102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:47.491411   44102 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containe
rRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:47.491618   44102 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
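	This is the full config minikube renders before writing it to /var/tmp/minikube/kubeadm.yaml.new (see the cp further below). A config like this can be sanity-checked before it is applied; recent kubeadm releases ship a validate subcommand, and a dry run works as a fallback (a sketch, not part of the test run):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new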
	
	I0916 10:49:47.491688   44102 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:47.505358   44102 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:47.505403   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:49:47.520006   44102 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:49:47.520019   44102 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:47.520050   44102 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:47.530854   44102 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:49:47.531003   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2144099678 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:49:47.542188   44102 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:49:47.542200   44102 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:49:47.542239   44102 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:49:47.551571   44102 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:47.551736   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3534866455 /lib/systemd/system/kubelet.service
	I0916 10:49:47.564693   44102 exec_runner.go:144] found /var/tmp/minikube/kubeadm.yaml.new, removing ...
	I0916 10:49:47.564709   44102 exec_runner.go:203] rm: /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:47.564753   44102 exec_runner.go:51] Run: sudo rm -f /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:47.579911   44102 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2006 bytes)
	I0916 10:49:47.580073   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube844047420 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:47.590241   44102 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:47.591672   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:49:47.877109   44102 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:49:47.891401   44102 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:49:47.891417   44102 certs.go:194] generating shared ca certs ...
	I0916 10:49:47.891435   44102 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:47.891564   44102 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:49:47.891613   44102 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:49:47.891618   44102 certs.go:256] generating profile certs ...
	I0916 10:49:47.891686   44102 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:49:47.891720   44102 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:49:47.891748   44102 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:49:47.891839   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem (1338 bytes)
	W0916 10:49:47.891860   44102 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:47.891866   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:49:47.891886   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:49:47.891903   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:47.891920   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:49:47.891952   44102 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:49:47.892465   44102 exec_runner.go:144] found /var/lib/minikube/certs/ca.crt, removing ...
	I0916 10:49:47.892473   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.crt
	I0916 10:49:47.892502   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.crt
	I0916 10:49:47.900967   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:47.901138   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1210378531 /var/lib/minikube/certs/ca.crt
	I0916 10:49:47.912023   44102 exec_runner.go:144] found /var/lib/minikube/certs/ca.key, removing ...
	I0916 10:49:47.912037   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/ca.key
	I0916 10:49:47.912078   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/ca.key
	I0916 10:49:47.921462   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:49:47.921578   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube992492122 /var/lib/minikube/certs/ca.key
	I0916 10:49:47.930532   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.crt, removing ...
	I0916 10:49:47.930544   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:47.930574   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:47.941547   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:47.941735   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3597742140 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:47.949959   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client-ca.key, removing ...
	I0916 10:49:47.949972   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:47.950013   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:47.958535   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:47.958720   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube39691256 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:47.968590   44102 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.crt, removing ...
	I0916 10:49:47.968603   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:47.968639   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:47.979089   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:49:47.979255   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3184187309 /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:47.992482   44102 exec_runner.go:144] found /var/lib/minikube/certs/apiserver.key, removing ...
	I0916 10:49:47.992493   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/apiserver.key
	I0916 10:49:47.992527   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/apiserver.key
	I0916 10:49:48.004500   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:48.004654   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1788991639 /var/lib/minikube/certs/apiserver.key
	I0916 10:49:48.014833   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.crt, removing ...
	I0916 10:49:48.014847   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:48.014899   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:48.023719   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:48.023836   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1735539355 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:48.031596   44102 exec_runner.go:144] found /var/lib/minikube/certs/proxy-client.key, removing ...
	I0916 10:49:48.031607   44102 exec_runner.go:203] rm: /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:48.031636   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:48.040493   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:49:48.040612   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1102660184 /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:48.048037   44102 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:49:48.048046   44102 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.048082   44102 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.055311   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:48.055454   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube284381175 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.062797   44102 exec_runner.go:144] found /usr/share/ca-certificates/11057.pem, removing ...
	I0916 10:49:48.062806   44102 exec_runner.go:203] rm: /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.062832   44102 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.070851   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem --> /usr/share/ca-certificates/11057.pem (1338 bytes)
	I0916 10:49:48.070962   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1901239915 /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.078323   44102 exec_runner.go:144] found /usr/share/ca-certificates/110572.pem, removing ...
	I0916 10:49:48.078331   44102 exec_runner.go:203] rm: /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.078357   44102 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.085407   44102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /usr/share/ca-certificates/110572.pem (1708 bytes)
	I0916 10:49:48.085507   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3710384688 /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.093097   44102 exec_runner.go:144] found /var/lib/minikube/kubeconfig, removing ...
	I0916 10:49:48.093105   44102 exec_runner.go:203] rm: /var/lib/minikube/kubeconfig
	I0916 10:49:48.093131   44102 exec_runner.go:51] Run: sudo rm -f /var/lib/minikube/kubeconfig
	I0916 10:49:48.100378   44102 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:48.100504   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2672007197 /var/lib/minikube/kubeconfig
	I0916 10:49:48.107945   44102 exec_runner.go:51] Run: openssl version
	I0916 10:49:48.110668   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:48.118824   44102 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.120087   44102 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:49 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.120116   44102 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:48.122882   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:48.131516   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11057.pem && ln -fs /usr/share/ca-certificates/11057.pem /etc/ssl/certs/11057.pem"
	I0916 10:49:48.139894   44102 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.141127   44102 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1338 Sep 16 10:49 /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.141159   44102 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11057.pem
	I0916 10:49:48.143998   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11057.pem /etc/ssl/certs/51391683.0"
	I0916 10:49:48.151314   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110572.pem && ln -fs /usr/share/ca-certificates/110572.pem /etc/ssl/certs/110572.pem"
	I0916 10:49:48.160395   44102 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.161633   44102 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1708 Sep 16 10:49 /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.161664   44102 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110572.pem
	I0916 10:49:48.164355   44102 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110572.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:49:48.172002   44102 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:48.173296   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:48.176039   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:48.178835   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:48.181448   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:48.184066   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:48.186560   44102 exec_runner.go:51] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
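	Each of the openssl invocations above is an expiry probe: `-checkend 86400` makes openssl exit non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how minikube decides whether a cert needs regenerating. The same check can be run by hand against any of the listed files (a sketch, using a path from the log):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"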
	I0916 10:49:48.189114   44102 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:48.189216   44102 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:49:48.205373   44102 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:49:48.213537   44102 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:49:48.213544   44102 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:49:48.213575   44102 exec_runner.go:51] Run: sudo test -d /data/minikube
	I0916 10:49:48.220853   44102 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:48.221144   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:49:48.222201   44102 exec_runner.go:51] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:48.229505   44102 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-09-16 10:48:41.770801188 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-09-16 10:49:47.577025778 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
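	The drift detection here is just `diff -u` on the old and new rendered configs: diff exits non-zero when the files differ, and any difference (in this run, the swapped enable-admission-plugins list) triggers the container stop and kubeadm phase replay that follow. The same check can be run by hand (a sketch):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; echo "exit=$?"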
	I0916 10:49:48.229512   44102 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:49:48.229546   44102 exec_runner.go:51] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:49:48.252689   44102 docker.go:483] Stopping containers: [4c8dc9f7334c 3c1686a3f081 d36cca85a0cf 89edf012e73d 4045e763ce4d 01deb4e9cb0c 733fde545b97 8dde68f011d3 cddf26022ee7 4cc6aa8bc7d5 7b5dd454fcc4 13ae9078b412 b80696d65d3f a45299c063bb 6af15c63a009 0d522fc642e5 ff9c282d3903 552dd24d3b02 67e355cfcbda bd9bbeacd72d 76c209608f0b dc3e2cee9ae5 28927fc2d654 ad166eb13016 317985ddf47a 59ae2583e1f5 b51e183b7b46 a8e886cfa378 60d1d58f4944 6b9df597ae39 dc4e1eb7881a 5b34f2349a51 a1b484ea8be6 75baf2b9ae9f cb842334bb4e 33693827aa1a]
	I0916 10:49:48.252753   44102 exec_runner.go:51] Run: docker stop 4c8dc9f7334c 3c1686a3f081 d36cca85a0cf 89edf012e73d 4045e763ce4d 01deb4e9cb0c 733fde545b97 8dde68f011d3 cddf26022ee7 4cc6aa8bc7d5 7b5dd454fcc4 13ae9078b412 b80696d65d3f a45299c063bb 6af15c63a009 0d522fc642e5 ff9c282d3903 552dd24d3b02 67e355cfcbda bd9bbeacd72d 76c209608f0b dc3e2cee9ae5 28927fc2d654 ad166eb13016 317985ddf47a 59ae2583e1f5 b51e183b7b46 a8e886cfa378 60d1d58f4944 6b9df597ae39 dc4e1eb7881a 5b34f2349a51 a1b484ea8be6 75baf2b9ae9f cb842334bb4e 33693827aa1a
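	The container list above is discovered with a docker name filter matching cri-dockerd's k8s_<container>_<pod>_<namespace>_ naming scheme, restricted to kube-system. A roughly equivalent interactive one-liner (quoting added for the shell; minikube runs the two steps separately as logged):
	
	    # List every kube-system container, running or exited, then stop them all.
	    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}'
	    docker stop $(docker ps -aq --filter=name='k8s_.*_(kube-system)_')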
	I0916 10:49:48.442643   44102 exec_runner.go:51] Run: sudo systemctl stop kubelet
	I0916 10:49:48.560784   44102 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:49:48.569413   44102 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 16 10:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Sep 16 10:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 16 10:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5599 Sep 16 10:48 /etc/kubernetes/scheduler.conf
	
	I0916 10:49:48.569461   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0916 10:49:48.577497   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0916 10:49:48.585125   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0916 10:49:48.593954   44102 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:48.593994   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:49:48.601757   44102 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0916 10:49:48.609635   44102 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: exit status 1
	stdout:
	
	stderr:
	I0916 10:49:48.609678   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
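	Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; files missing it (controller-manager.conf and scheduler.conf above) are deleted so the kubeconfig phase below regenerates them. The same sweep as a loop (loop form is mine; minikube runs the greps one by one as logged):
	
	    # Drop any kubeconfig that no longer points at the expected endpoint
	    # so `kubeadm init phase kubeconfig` recreates it.
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8441' \
	        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done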
	I0916 10:49:48.617071   44102 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:49:48.625228   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:48.665763   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:49.684392   44102 exec_runner.go:84] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.01860289s)
	I0916 10:49:49.684410   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:49.970734   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:50.016697   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
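	Rather than a full `kubeadm init`, the restart path replays individual phases against the same rendered config, with PATH prefixed so the cached v1.31.1 binaries are used. In the order executed above:
	
	    # The five kubeadm phases run during a control-plane reconfigure.
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml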
	I0916 10:49:50.077897   44102 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:49:50.077968   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:50.578901   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:51.078602   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:49:51.092825   44102 api_server.go:72] duration metric: took 1.014927236s to wait for apiserver process to appear ...
	I0916 10:49:51.092842   44102 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:49:51.092863   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:53.483754   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 10:49:53.483770   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 10:49:53.483783   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:53.521507   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:53.521527   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:53.593683   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:53.597734   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:53.597754   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:54.093924   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:54.097428   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:54.097446   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:54.593822   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:54.601916   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:49:54.601932   44102 api_server.go:103] status: https://10.138.0.48:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:49:55.093540   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:49:55.097489   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:49:55.102727   44102 api_server.go:141] control plane version: v1.31.1
	I0916 10:49:55.102741   44102 api_server.go:131] duration metric: took 4.009894582s to wait for apiserver health ...
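	The probe sequence above is normal for a restart: the first anonymous request gets 403 because the rbac/bootstrap-roles poststarthook has not yet installed the role that lets unauthenticated clients read /healthz; the subsequent 500s enumerate which poststarthooks are still pending, shrinking on each ~500ms poll until a bare 200 "ok". The same endpoint can be probed by hand (curl here is an illustration, not what minikube executes):
	
	    # -k skips TLS verification for an anonymous probe; ?verbose prints the
	    # per-check [+]/[-] lines seen in the log above.
	    curl -k 'https://10.138.0.48:8441/healthz?verbose'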
	I0916 10:49:55.102748   44102 cni.go:84] Creating CNI manager for ""
	I0916 10:49:55.102757   44102 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:49:55.104363   44102 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:49:55.105580   44102 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:49:55.115275   44102 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:49:55.115383   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube160449351 /etc/cni/net.d/1-k8s.conflist
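	On Kubernetes v1.24+ with the none driver and docker runtime, minikube installs a bridge CNI config itself. The log records only the destination path and size (496 bytes), not the contents; to see what was actually written, inspect the file on the host:
	
	    # The generated bridge CNI config; contents are not echoed in the log.
	    sudo cat /etc/cni/net.d/1-k8s.conflist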
	I0916 10:49:55.124322   44102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:49:55.133278   44102 system_pods.go:59] 7 kube-system pods found
	I0916 10:49:55.133294   44102 system_pods.go:61] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:49:55.133299   44102 system_pods.go:61] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:49:55.133305   44102 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [4a0a9d93-9f46-4cd7-a3fd-1f7370245887] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:49:55.133310   44102 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:49:55.133314   44102 system_pods.go:61] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0916 10:49:55.133318   44102 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 10:49:55.133322   44102 system_pods.go:61] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:49:55.133327   44102 system_pods.go:74] duration metric: took 8.997814ms to wait for pod list to return data ...
	I0916 10:49:55.133332   44102 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:49:55.136280   44102 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:55.136297   44102 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:55.136306   44102 node_conditions.go:105] duration metric: took 2.970939ms to run NodePressure ...
	I0916 10:49:55.136319   44102 exec_runner.go:51] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:49:55.378848   44102 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 10:49:55.382416   44102 kubeadm.go:739] kubelet initialised
	I0916 10:49:55.382425   44102 kubeadm.go:740] duration metric: took 3.564162ms waiting for restarted kubelet to initialise ...
	I0916 10:49:55.382430   44102 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:49:55.386974   44102 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:57.392689   44102 pod_ready.go:103] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"False"
	I0916 10:49:57.892929   44102 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:57.892941   44102 pod_ready.go:82] duration metric: took 2.505952837s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:57.892948   44102 pod_ready.go:79] waiting up to 4m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:59.898724   44102 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:02.398645   44102 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:02.398655   44102 pod_ready.go:82] duration metric: took 4.505702789s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:02.398664   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:04.403969   44102 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:06.404601   44102 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:06.404611   44102 pod_ready.go:82] duration metric: took 4.005942832s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.404619   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.409868   44102 pod_ready.go:103] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:08.910387   44102 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.910398   44102 pod_ready.go:82] duration metric: took 2.505774179s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.910405   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.914996   44102 pod_ready.go:93] pod "kube-proxy-lt5f5" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.915009   44102 pod_ready.go:82] duration metric: took 4.598106ms for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.915019   44102 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.919034   44102 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.919042   44102 pod_ready.go:82] duration metric: took 4.017487ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.919050   44102 pod_ready.go:39] duration metric: took 13.536612391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
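	The readiness loop above is minikube's internal poller keyed on the listed labels. A roughly equivalent check with kubectl (a hypothetical reproduction, not a command from this run):
	
	    # Wait on the same system-critical pods by label selector.
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=4m
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m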
	I0916 10:50:08.919069   44102 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:50:08.928241   44102 ops.go:34] apiserver oom_adj: -16
	I0916 10:50:08.928249   44102 kubeadm.go:597] duration metric: took 20.714700355s to restartPrimaryControlPlane
	I0916 10:50:08.928254   44102 kubeadm.go:394] duration metric: took 20.73914576s to StartCluster
	I0916 10:50:08.928267   44102 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:08.928326   44102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:08.928829   44102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:08.929108   44102 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:50:08.929178   44102 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:50:08.929190   44102 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	W0916 10:50:08.929195   44102 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:50:08.929198   44102 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:50:08.929214   44102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:50:08.929217   44102 host.go:66] Checking if "minikube" exists ...
	I0916 10:50:08.929232   44102 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:08.929617   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:50:08.929625   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:50:08.929651   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:08.929686   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:50:08.929694   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:50:08.929746   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:08.931620   44102 out.go:177] * Configuring local host environment ...
	W0916 10:50:08.933142   44102 out.go:270] * 
	W0916 10:50:08.933162   44102 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:50:08.933167   44102 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:50:08.933170   44102 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:50:08.933174   44102 out.go:270] * 
	W0916 10:50:08.933209   44102 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:50:08.933216   44102 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:50:08.933219   44102 out.go:270] * 
	W0916 10:50:08.933240   44102 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:50:08.933262   44102 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:50:08.933270   44102 out.go:270] * 
	W0916 10:50:08.933275   44102 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
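	As the warning notes, the ownership fix-ups can be automated instead of running the mv/chown commands above by hand:
	
	    # Let minikube chown its config to the invoking user on 'none' driver runs.
	    export CHANGE_MINIKUBE_NONE_USER=true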
	I0916 10:50:08.933310   44102 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:50:08.934686   44102 out.go:177] * Verifying Kubernetes components...
	I0916 10:50:08.936373   44102 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:50:08.946768   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
	I0916 10:50:08.948421   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
	I0916 10:50:08.957203   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
	I0916 10:50:08.957255   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
	I0916 10:50:08.958773   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
	I0916 10:50:08.958808   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
	I0916 10:50:08.967420   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:50:08.967441   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:08.967696   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:50:08.967711   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:08.971852   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:50:08.972292   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
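	Before trusting /healthz, minikube also verifies the apiserver's container is not paused: it resolves the process's freezer cgroup from /proc/<pid>/cgroup and expects freezer.state to read THAWED. Reproduced with the PID and pod hash observed in this run:
	
	    # Resolve the apiserver's freezer cgroup (PID 46869 in this run) and
	    # confirm the container is THAWED, i.e. not paused.
	    sudo egrep '^[0-9]+:freezer:' /proc/46869/cgroup
	    sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state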
	I0916 10:50:08.972953   44102 addons.go:234] Setting addon default-storageclass=true in "minikube"
	W0916 10:50:08.972961   44102 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:50:08.972979   44102 host.go:66] Checking if "minikube" exists ...
	I0916 10:50:08.973448   44102 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8441"
	I0916 10:50:08.973455   44102 api_server.go:166] Checking apiserver status ...
	I0916 10:50:08.973479   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:08.973984   44102 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:50:08.975365   44102 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.975381   44102 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:50:08.975386   44102 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.975417   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.983324   44102 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:50:08.983471   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1649761568 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:08.991981   44102 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/46869/cgroup
	I0916 10:50:08.994149   44102 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:09.003412   44102 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0"
	I0916 10:50:09.003491   44102 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4642e2c137134acfd9b1b4b4e9aa2fbd/46d889fefcb7ac0e24fe20eb009d1a7a242d9948e1828a0255e773dc221a1fa0/freezer.state
	I0916 10:50:09.014326   44102 api_server.go:204] freezer state: "THAWED"
	I0916 10:50:09.014349   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:09.018714   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:50:09.018754   44102 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.018771   44102 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:50:09.018778   44102 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.018822   44102 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.038400   44102 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:50:09.038571   44102 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2232686379 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.051205   44102 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:09.256034   44102 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:50:09.269235   44102 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:50:09.271890   44102 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0916 10:50:09.271899   44102 node_ready.go:38] duration metric: took 2.646283ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0916 10:50:09.271905   44102 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:09.276337   44102 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.280639   44102 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.280647   44102 pod_ready.go:82] duration metric: took 4.300934ms for pod "coredns-7c65d6cfc9-9tmvq" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.280654   44102 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.308578   44102 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.308591   44102 pod_ready.go:82] duration metric: took 27.93217ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.308599   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.492600   44102 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:50:09.494021   44102 addons.go:510] duration metric: took 564.915064ms for enable addons: enabled=[default-storageclass storage-provisioner]
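	Addon manifests are staged from temp files into /etc/kubernetes/addons and applied with the cached kubectl against the root kubeconfig kubeadm wrote, exactly as the Run: lines above show:
	
	    # Apply an addon manifest the way minikube does on the 'none' driver.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml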
	I0916 10:50:09.708409   44102 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.708421   44102 pod_ready.go:82] duration metric: took 399.817476ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.708431   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.108446   44102 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.108456   44102 pod_ready.go:82] duration metric: took 400.019969ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.108466   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.508234   44102 pod_ready.go:93] pod "kube-proxy-lt5f5" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.508255   44102 pod_ready.go:82] duration metric: took 399.773468ms for pod "kube-proxy-lt5f5" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.508264   44102 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.908192   44102 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.908203   44102 pod_ready.go:82] duration metric: took 399.935295ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.908212   44102 pod_ready.go:39] duration metric: took 1.636299031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:10.908227   44102 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:50:10.908289   44102 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:10.922119   44102 api_server.go:72] duration metric: took 1.988780115s to wait for apiserver process to appear ...
	I0916 10:50:10.922134   44102 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:50:10.922153   44102 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8441/healthz ...
	I0916 10:50:10.925548   44102 api_server.go:279] https://10.138.0.48:8441/healthz returned 200:
	ok
	I0916 10:50:10.926399   44102 api_server.go:141] control plane version: v1.31.1
	I0916 10:50:10.926408   44102 api_server.go:131] duration metric: took 4.269595ms to wait for apiserver health ...
	I0916 10:50:10.926414   44102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:50:11.110513   44102 system_pods.go:59] 7 kube-system pods found
	I0916 10:50:11.110526   44102 system_pods.go:61] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:50:11.110529   44102 system_pods.go:61] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:50:11.110532   44102 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [4a0a9d93-9f46-4cd7-a3fd-1f7370245887] Running
	I0916 10:50:11.110536   44102 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:50:11.110538   44102 system_pods.go:61] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:50:11.110541   44102 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:50:11.110543   44102 system_pods.go:61] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running
	I0916 10:50:11.110548   44102 system_pods.go:74] duration metric: took 184.129488ms to wait for pod list to return data ...
	I0916 10:50:11.110554   44102 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:50:11.308206   44102 default_sa.go:45] found service account: "default"
	I0916 10:50:11.308219   44102 default_sa.go:55] duration metric: took 197.660035ms for default service account to be created ...
	I0916 10:50:11.308225   44102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:50:11.510357   44102 system_pods.go:86] 7 kube-system pods found
	I0916 10:50:11.510371   44102 system_pods.go:89] "coredns-7c65d6cfc9-9tmvq" [64b157a7-a274-493f-ad2d-3eb841c345db] Running
	I0916 10:50:11.510376   44102 system_pods.go:89] "etcd-ubuntu-20-agent-2" [3c8b28a0-6d7f-43b4-b42b-c4a47eab96fb] Running
	I0916 10:50:11.510379   44102 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [4a0a9d93-9f46-4cd7-a3fd-1f7370245887] Running
	I0916 10:50:11.510382   44102 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [45d39430-8de5-404d-a2b8-bbf47738a4c7] Running
	I0916 10:50:11.510385   44102 system_pods.go:89] "kube-proxy-lt5f5" [2e01c31f-c798-45c0-98a2-ee94c3b9d631] Running
	I0916 10:50:11.510387   44102 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [a9041542-d7b5-4571-87c5-a6e9e4ecfd5e] Running
	I0916 10:50:11.510389   44102 system_pods.go:89] "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running
	I0916 10:50:11.510395   44102 system_pods.go:126] duration metric: took 202.165936ms to wait for k8s-apps to be running ...
	I0916 10:50:11.510400   44102 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:50:11.510443   44102 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:50:11.522234   44102 system_svc.go:56] duration metric: took 11.824388ms WaitForService to wait for kubelet
	I0916 10:50:11.522250   44102 kubeadm.go:582] duration metric: took 2.588917885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:50:11.522265   44102 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:50:11.708617   44102 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:11.708628   44102 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:11.708635   44102 node_conditions.go:105] duration metric: took 186.36639ms to run NodePressure ...
	I0916 10:50:11.708644   44102 start.go:241] waiting for startup goroutines ...
	I0916 10:50:11.708649   44102 start.go:246] waiting for cluster config update ...
	I0916 10:50:11.708658   44102 start.go:255] writing updated cluster config ...
	I0916 10:50:11.708906   44102 exec_runner.go:51] Run: rm -f paused
	I0916 10:50:11.712704   44102 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:50:11.713754   44102 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
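	The trailing error does not fail the start but is worth flagging: "exec format error" from fork/exec means /usr/local/bin/kubectl exists but is not a runnable binary for this host, typically a wrong-architecture download or a truncated file. A quick diagnostic (a suggestion, not part of the test run):
	
	    # Identify what is actually installed at the failing path.
	    file /usr/local/bin/kubectl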
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:50:16 UTC. --
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.327206842Z" level=info msg="ignoring event" container=cddf26022ee7468f6f5285ac9605b017ab7d59d05196a64ee72b6fb2c37a931d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.327260261Z" level=info msg="ignoring event" container=4c8dc9f7334c2a7afc6de182bab4178101d4c2627439740504f7e17f85dde35c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.328419724Z" level=info msg="ignoring event" container=4cc6aa8bc7d5e9b6c23b0ffef1d7dd33c125694c09d123e93105211110fc35d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.342215724Z" level=info msg="ignoring event" container=7b5dd454fcc4f4ca4ab258f0f3f3f6b009d55ed512e77ba61d248f8d98c06cb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.355706853Z" level=info msg="ignoring event" container=733fde545b9700e451efe7302c3fab774b29f95a4e2a4c266185a1f6906b6305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.362421026Z" level=info msg="ignoring event" container=d36cca85a0cf0e08b86d5f561cee6dadd426b71f565584ca300ff922a44b6af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.419908685Z" level=info msg="ignoring event" container=3c1686a3f081659b27d32842de1f945b93fd57c4bda45349659678d8dbd8152d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"33693827aa1af634593b8fe1bf32ef602c24c24b9b2b084a7cf0811d3e52d0a4\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"75baf2b9ae9f6924e7f354be0debcdc1254644d58d79381d5ce09b167a3ac872\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87e5de0471ea69fb8e34c546e4892215dd0cf17c295ac4ade0e5f68165e028e4/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857b5574b5ed24fd458b7d9caeb741273b94cafa380f363c834dc741c67be6bc/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2740906d206d0180f54e8558d2448e37481489a23df6bfd12097d07aa61a198/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b5e4a7c1dc72c399487814945c2fe454277fa0ed099902c0983e1d7bf97645f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:51 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:53 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f3456b9ca9b8f7ddd786697c6f8a2fd71715f0ee116f88138b76e67c24ceb3c/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e79acef8fbbd7a1f8cc65da627523ab9ab48441a2fe2f69d88f9fc35aba2cb2/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f04dd1758d06d211cc71418383ba2aa440d9092c700cd0c206655578bf0b049f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/396cdfa7884cc327569a77054f27020715549649b6a7fd3b233783d296023cb9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206254bc5172ca5de6cd75834006383ffaea64ecd25d9953cb741a27628a5a9f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	76cbcdfc11b3b       c69fa2e9cbf5f       22 seconds ago       Running             coredns                   2                   f04dd1758d06d       coredns-7c65d6cfc9-9tmvq
	088c924c78362       6e38f40d628db       22 seconds ago       Running             storage-provisioner       4                   3e79acef8fbbd       storage-provisioner
	25e33a97327c4       60c005f310ff3       22 seconds ago       Running             kube-proxy                3                   4f3456b9ca9b8       kube-proxy-lt5f5
	9db9497d6e3b9       9aa1fad941575       26 seconds ago       Running             kube-scheduler            3                   5b5e4a7c1dc72       kube-scheduler-ubuntu-20-agent-2
	88111361538ed       2e96e5913fc06       26 seconds ago       Running             etcd                      3                   d2740906d206d       etcd-ubuntu-20-agent-2
	7bedc882faf66       175ffd71cce3d       26 seconds ago       Running             kube-controller-manager   3                   857b5574b5ed2       kube-controller-manager-ubuntu-20-agent-2
	46d889fefcb7a       6bab7719df100       26 seconds ago       Running             kube-apiserver            0                   87e5de0471ea6       kube-apiserver-ubuntu-20-agent-2
	4c8dc9f7334c2       175ffd71cce3d       29 seconds ago       Exited              kube-controller-manager   2                   4045e763ce4dd       kube-controller-manager-ubuntu-20-agent-2
	3c1686a3f0816       9aa1fad941575       29 seconds ago       Exited              kube-scheduler            2                   733fde545b970       kube-scheduler-ubuntu-20-agent-2
	d36cca85a0cf0       60c005f310ff3       29 seconds ago       Exited              kube-proxy                2                   4cc6aa8bc7d5e       kube-proxy-lt5f5
	89edf012e73d5       2e96e5913fc06       29 seconds ago       Exited              etcd                      2                   7b5dd454fcc4f       etcd-ubuntu-20-agent-2
	b80696d65d3f0       6e38f40d628db       30 seconds ago       Created             storage-provisioner       3                   b51e183b7b46c       storage-provisioner
	a45299c063bb1       c69fa2e9cbf5f       57 seconds ago       Exited              coredns                   1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	67e355cfcbda0       6bab7719df100       About a minute ago   Exited              kube-apiserver            1                   28927fc2d6545       kube-apiserver-ubuntu-20-agent-2
	
	
	==> coredns [76cbcdfc11b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58241 - 18724 "HINFO IN 6119160872083283358.4362415468974086659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018519672s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:50:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     81s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         87s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-n42l6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ft6nz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 21s                kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeHasSufficientPID     86s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 86s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  86s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           82s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           55s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 26s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 26s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    26s (x7 over 26s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           20s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 48 11 a5 11 65 08 06
	[  +0.010011] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	
	
	==> etcd [88111361538e] <==
	{"level":"info","ts":"2024-09-16T10:49:50.871606Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:50.871736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:50.874219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:50.874741Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874798Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874869Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:50.874900Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:52.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.662104Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:52.662126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.662344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.663195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:52.664047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [89edf012e73d] <==
	{"level":"info","ts":"2024-09-16T10:49:47.744523Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-16T10:49:47.753231Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","commit-index":515}
	{"level":"info","ts":"2024-09-16T10:49:47.754041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-16T10:49:47.754098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became follower at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:47.754122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b435b960bec7c3c [peers: [], term: 3, commit: 515, applied: 0, lastindex: 515, lastterm: 3]"}
	{"level":"warn","ts":"2024-09-16T10:49:47.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-16T10:49:47.759048Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":489}
	{"level":"info","ts":"2024-09-16T10:49:47.762168Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-16T10:49:47.763923Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b435b960bec7c3c","timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:47.764228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:47.764268Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b435b960bec7c3c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T10:49:47.764903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:47.766996Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T10:49:47.767044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:49:47.767440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:47.767550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:47.768180Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:47.768234Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:47.768334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:47.768351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	
	
	==> kernel <==
	 10:50:16 up 32 min,  0 users,  load average: 0.92, 0.51, 0.30
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [46d889fefcb7] <==
	I0916 10:49:53.575283       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:53.575301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:53.575465       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:53.580633       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:49:53.580673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:53.596395       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:53.596433       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:53.596442       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:53.596449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:53.596455       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:53.599321       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:54.478124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:55.207989       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:49:55.217830       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:49:55.248987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:49:55.269731       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:49:55.276367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:49:57.099450       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:49:57.249320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:50:15.353595       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:50:15.383572       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:50:15.462425       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.78.248"}
	I0916 10:50:15.474116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.44.150"}
	
	
	==> kube-apiserver [67e355cfcbda] <==
	W0916 10:49:45.070608       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.109161       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.120779       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.120899       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.134173       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.149767       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.185767       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.187044       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.304341       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.320994       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.344654       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.353348       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.380165       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.387448       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.409947       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.461534       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.512147       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.532416       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.603473       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.683743       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.694566       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.695882       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.698138       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.773255       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.792702       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c8dc9f7334c] <==
	I0916 10:49:48.173517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [7bedc882faf6] <==
	I0916 10:49:57.053832       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:57.101503       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:49:57.463535       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:57.495994       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:49:57.496027       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:49:57.813840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.70585ms"
	I0916 10:49:57.813980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.84µs"
	I0916 10:50:15.401924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.824932ms"
	E0916 10:50:15.401965       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.228174ms"
	E0916 10:50:15.406402       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.685351ms"
	E0916 10:50:15.406718       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.684096ms"
	E0916 10:50:15.412650       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.953128ms"
	E0916 10:50:15.413009       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.425332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.293667ms"
	I0916 10:50:15.431862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.471098ms"
	I0916 10:50:15.431951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="54.442µs"
	I0916 10:50:15.435557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.685µs"
	I0916 10:50:15.444643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.47397ms"
	I0916 10:50:15.450160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.47806ms"
	I0916 10:50:15.450257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.466µs"
	I0916 10:50:15.455986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="40.936µs"
	
	
	==> kube-proxy [25e33a97327c] <==
	I0916 10:49:54.681567       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:54.797102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:54.797163       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:54.816103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:54.816152       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:54.817801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:54.818176       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:54.818215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:54.819244       1 config.go:199] "Starting service config controller"
	I0916 10:49:54.819298       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:54.819317       1 config.go:328] "Starting node config controller"
	I0916 10:49:54.819328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:54.819356       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:54.819397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:54.919504       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:54.919540       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:54.919510       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d36cca85a0cf] <==
	I0916 10:49:47.834945       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:49:47.965482       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	
	
	==> kube-scheduler [3c1686a3f081] <==
	I0916 10:49:48.153578       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:48.393574       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://10.138.0.48:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.138.0.48:8441: connect: connection refused
	W0916 10:49:48.393620       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:48.393632       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:48.399434       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:48.399458       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 10:49:48.399475       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 10:49:48.401582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:48.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:49:48.401653       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 10:49:48.401826       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:48.401867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:48.401888       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:49:48.401944       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0916 10:49:48.401999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9db9497d6e3b] <==
	I0916 10:49:51.325271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:53.502430       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:53.502467       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0916 10:49:53.502481       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:53.502490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:53.525152       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:53.525177       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:53.527126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:53.527171       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:53.527325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:53.527440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:53.627582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:50:16 UTC. --
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: E0916 10:49:50.430915   46464 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 10.138.0.48:8441: connect: connection refused" node="ubuntu-20-agent-2"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: E0916 10:49:50.645746   46464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ubuntu-20-agent-2?timeout=10s\": dial tcp 10.138.0.48:8441: connect: connection refused" interval="800ms"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.832845   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600201   46464 kubelet_node_status.go:111] "Node was previously registered" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600319   46464 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600358   46464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.601084   46464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.038292   46464 apiserver.go:52] "Watching apiserver"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.041192   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.043622   46464 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.051286   46464 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.065037   46464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ababb2af12b481e591ddfe93ae3b1f" path="/var/lib/kubelet/pods/a5ababb2af12b481e591ddfe93ae3b1f/volumes"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.093533   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=0.093511983 podStartE2EDuration="93.511983ms" podCreationTimestamp="2024-09-16 10:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:49:54.0850596 +0000 UTC m=+4.114093707" watchObservedRunningTime="2024-09-16 10:49:54.093511983 +0000 UTC m=+4.122546090"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100225   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-xtables-lock\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100303   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-lib-modules\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100365   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfe4a726-3764-4daf-a322-8f33ae3528f7-tmp\") pod \"storage-provisioner\" (UID: \"dfe4a726-3764-4daf-a322-8f33ae3528f7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.211205   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:57 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:57.796237   46464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: E0916 10:50:15.424498   46464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.424567   46464 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531002   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c77012c-f486-455a-948c-0a12d040e2d0-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531047   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp2t\" (UniqueName: \"kubernetes.io/projected/0b84536b-e981-44f8-9021-6593d46481c1-kube-api-access-nzp2t\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531072   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz4t6\" (UniqueName: \"kubernetes.io/projected/2c77012c-f486-455a-948c-0a12d040e2d0-kube-api-access-tz4t6\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531091   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0b84536b-e981-44f8-9021-6593d46481c1-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.638442   46464 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [088c924c7836] <==
	I0916 10:49:54.673228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:49:54.686267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:49:54.686349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:50:12.083437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:50:12.083563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"741f2d64-542e-41ba-a831-0f0a3ad64a15", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77 became leader
	I0916 10:50:12.083591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	I0916 10:50:12.184444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	
	
	==> storage-provisioner [b80696d65d3f] <==
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (428.826µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/DashboardCmd (2.09s)

TestFunctional/parallel/ServiceCmd/DeployApp (0s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (517.162µs)
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.00s)

TestFunctional/parallel/ServiceCmd/List (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"|----------------------|---------------------------|--------------|-----|\n|      NAMESPACE       |           NAME            | TARGET PORT  | URL |\n|----------------------|---------------------------|--------------|-----|\n| default              | kubernetes                | No node port |     |\n| kube-system          | kube-dns                  | No node port |     |\n| kubernetes-dashboard | dashboard-metrics-scraper | No node port |     |\n| kubernetes-dashboard | kubernetes-dashboard      | No node port |     |\n|----------------------|---------------------------|--------------|-----|\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.17s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "168.743031ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
functional_test.go:1498: expected the json of 'service list' to include "hello-node" but got *"[{\"Namespace\":\"default\",\"Name\":\"kubernetes\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kube-system\",\"Name\":\"kube-dns\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kubernetes-dashboard\",\"Name\":\"dashboard-metrics-scraper\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kubernetes-dashboard\",\"Name\":\"kubernetes-dashboard\",\"URLs\":[],\"PortNames\":[\"No node port\"]}]"*. args: "out/minikube-linux-amd64 -p minikube service list -o json"
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.17s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node: exit status 115 (149.757663ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1511: failed to get service url. args "out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}: exit status 115 (157.731618ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}": exit status 115
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.16s)

TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube service hello-node --url: exit status 115 (161.918114ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1561: failed to get service url. args: "out/minikube-linux-amd64 -p minikube service hello-node --url": exit status 115
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.16s)
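Note: because 'minikube service hello-node --url' printed nothing, the scheme assertion at functional_test.go:1573 parses an empty string. A minimal sketch of why that yields scheme "" rather than "http" (the endpoint value mirrors the empty output above; names are illustrative):

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		endpoint := "" // what the failed '--url' invocation produced
		u, err := url.Parse(endpoint)
		if err != nil {
			fmt.Println("unparseable endpoint:", err)
			return
		}
		// An empty string parses without error but carries no scheme,
		// so a check for "http" sees "" instead.
		fmt.Printf("scheme=%q\n", u.Scheme)
	}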

TestFunctional/parallel/ServiceCmdConnect (1.17s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (466.099µs)
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context minikube describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context minikube describe po hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (389.092µs)
functional_test.go:1604: "kubectl --context minikube describe po hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context minikube logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context minikube logs -l app=hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (426.119µs)
functional_test.go:1610: "kubectl --context minikube logs -l app=hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context minikube describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context minikube describe svc hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (372.09µs)
functional_test.go:1616: "kubectl --context minikube describe svc hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| addons    | disable dashboard -p minikube                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| addons    | disable gvisor -p minikube                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| start     | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|           | --cert-expiration=3m                                                     |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	|           | --cert-expiration=8760h                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start     | -p minikube --memory=4000                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|           | --apiserver-port=8441                                                    |          |         |         |                     |                     |
	|           | --wait=all --driver=none                                                 |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --alsologtostderr                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | -v=8                                                                     |          |         |         |                     |                     |
	| kubectl   | minikube kubectl -- --context                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | minikube get pods                                                        |          |         |         |                     |                     |
	| start     | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |          |         |         |                     |                     |
	|           | --wait=all                                                               |          |         |         |                     |                     |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| config    | minikube config set cpus 2                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| dashboard | --url --port 36195 -p minikube                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr                                                        |          |         |         |                     |                     |
	|           | -v=1 --driver=none                                                       |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| service   | minikube service list                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service   | minikube service list -o json                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service   | minikube service                                                         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --namespace=default --https                                              |          |         |         |                     |                     |
	|           | --url hello-node                                                         |          |         |         |                     |                     |
	| service   | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --url --format={{.IP}}                                                   |          |         |         |                     |                     |
	| service   | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --url                                                                    |          |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:50:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:50:17.013809   49522 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:50:17.013928   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.013940   49522 out.go:358] Setting ErrFile to fd 2...
	I0916 10:50:17.013947   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.014283   49522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:50:17.014884   49522 out.go:352] Setting JSON to false
	I0916 10:50:17.016300   49522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1968,"bootTime":1726481849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:50:17.016418   49522 start.go:139] virtualization: kvm guest
	I0916 10:50:17.018914   49522 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:50:17.020443   49522 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:50:17.020481   49522 notify.go:220] Checking for updates...
	I0916 10:50:17.020483   49522 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:50:17.021852   49522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:50:17.023292   49522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:17.024682   49522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:50:17.025975   49522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:50:17.027472   49522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:50:17.029411   49522 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:17.029834   49522 exec_runner.go:51] Run: systemctl --version
	I0916 10:50:17.032099   49522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:50:17.042311   49522 out.go:177] * Using the none driver based on existing profile
	I0916 10:50:17.043885   49522 start.go:297] selected driver: none
	I0916 10:50:17.043900   49522 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:50:17.044037   49522 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:50:17.044058   49522 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:50:17.044345   49522 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:50:17.046514   49522 out.go:201] 
	W0916 10:50:17.047718   49522 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:50:17.049056   49522 out.go:201] 
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:50:21 UTC. --
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.328419724Z" level=info msg="ignoring event" container=4cc6aa8bc7d5e9b6c23b0ffef1d7dd33c125694c09d123e93105211110fc35d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.342215724Z" level=info msg="ignoring event" container=7b5dd454fcc4f4ca4ab258f0f3f3f6b009d55ed512e77ba61d248f8d98c06cb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.355706853Z" level=info msg="ignoring event" container=733fde545b9700e451efe7302c3fab774b29f95a4e2a4c266185a1f6906b6305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.362421026Z" level=info msg="ignoring event" container=d36cca85a0cf0e08b86d5f561cee6dadd426b71f565584ca300ff922a44b6af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.419908685Z" level=info msg="ignoring event" container=3c1686a3f081659b27d32842de1f945b93fd57c4bda45349659678d8dbd8152d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"33693827aa1af634593b8fe1bf32ef602c24c24b9b2b084a7cf0811d3e52d0a4\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"75baf2b9ae9f6924e7f354be0debcdc1254644d58d79381d5ce09b167a3ac872\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87e5de0471ea69fb8e34c546e4892215dd0cf17c295ac4ade0e5f68165e028e4/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857b5574b5ed24fd458b7d9caeb741273b94cafa380f363c834dc741c67be6bc/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2740906d206d0180f54e8558d2448e37481489a23df6bfd12097d07aa61a198/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b5e4a7c1dc72c399487814945c2fe454277fa0ed099902c0983e1d7bf97645f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:51 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:53 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f3456b9ca9b8f7ddd786697c6f8a2fd71715f0ee116f88138b76e67c24ceb3c/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e79acef8fbbd7a1f8cc65da627523ab9ab48441a2fe2f69d88f9fc35aba2cb2/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f04dd1758d06d211cc71418383ba2aa440d9092c700cd0c206655578bf0b049f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/396cdfa7884cc327569a77054f27020715549649b6a7fd3b233783d296023cb9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206254bc5172ca5de6cd75834006383ffaea64ecd25d9953cb741a27628a5a9f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:19 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:19Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 16 10:50:21 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:21Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED                  STATE               NAME                        ATTEMPT             POD ID              POD
	f3dad1361e62c       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   Less than a second ago   Running             dashboard-metrics-scraper   0                   206254bc5172c       dashboard-metrics-scraper-c5db448b4-n42l6
	b7dca8e1a7411       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 seconds ago            Running             kubernetes-dashboard        0                   396cdfa7884cc       kubernetes-dashboard-695b96c756-ft6nz
	76cbcdfc11b3b       c69fa2e9cbf5f                                                                                          27 seconds ago           Running             coredns                     2                   f04dd1758d06d       coredns-7c65d6cfc9-9tmvq
	088c924c78362       6e38f40d628db                                                                                          27 seconds ago           Running             storage-provisioner         4                   3e79acef8fbbd       storage-provisioner
	25e33a97327c4       60c005f310ff3                                                                                          27 seconds ago           Running             kube-proxy                  3                   4f3456b9ca9b8       kube-proxy-lt5f5
	9db9497d6e3b9       9aa1fad941575                                                                                          31 seconds ago           Running             kube-scheduler              3                   5b5e4a7c1dc72       kube-scheduler-ubuntu-20-agent-2
	88111361538ed       2e96e5913fc06                                                                                          31 seconds ago           Running             etcd                        3                   d2740906d206d       etcd-ubuntu-20-agent-2
	7bedc882faf66       175ffd71cce3d                                                                                          31 seconds ago           Running             kube-controller-manager     3                   857b5574b5ed2       kube-controller-manager-ubuntu-20-agent-2
	46d889fefcb7a       6bab7719df100                                                                                          31 seconds ago           Running             kube-apiserver              0                   87e5de0471ea6       kube-apiserver-ubuntu-20-agent-2
	4c8dc9f7334c2       175ffd71cce3d                                                                                          34 seconds ago           Exited              kube-controller-manager     2                   4045e763ce4dd       kube-controller-manager-ubuntu-20-agent-2
	3c1686a3f0816       9aa1fad941575                                                                                          34 seconds ago           Exited              kube-scheduler              2                   733fde545b970       kube-scheduler-ubuntu-20-agent-2
	d36cca85a0cf0       60c005f310ff3                                                                                          34 seconds ago           Exited              kube-proxy                  2                   4cc6aa8bc7d5e       kube-proxy-lt5f5
	89edf012e73d5       2e96e5913fc06                                                                                          34 seconds ago           Exited              etcd                        2                   7b5dd454fcc4f       etcd-ubuntu-20-agent-2
	b80696d65d3f0       6e38f40d628db                                                                                          35 seconds ago           Created             storage-provisioner         3                   b51e183b7b46c       storage-provisioner
	a45299c063bb1       c69fa2e9cbf5f                                                                                          About a minute ago       Exited              coredns                     1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	67e355cfcbda0       6bab7719df100                                                                                          About a minute ago       Exited              kube-apiserver              1                   28927fc2d6545       kube-apiserver-ubuntu-20-agent-2
	
	
	==> coredns [76cbcdfc11b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58241 - 18724 "HINFO IN 6119160872083283358.4362415468974086659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018519672s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:50:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:49:53 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         92s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-n42l6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ft6nz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 85s                kube-proxy       
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 63s                kube-proxy       
	  Normal   NodeHasSufficientPID     91s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 91s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  91s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 91s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           87s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           60s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   NodeHasSufficientMemory  31s (x8 over 31s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 31s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    31s (x7 over 31s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s (x7 over 31s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           25s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	[Sep16 10:50] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 9c f5 dc 07 74 08 06
	
	
	==> etcd [88111361538e] <==
	{"level":"info","ts":"2024-09-16T10:49:50.871606Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:50.871736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:50.874219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:50.874741Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874798Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874869Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:50.874900Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:52.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.662104Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:52.662126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.662344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.663195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:52.664047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [89edf012e73d] <==
	{"level":"info","ts":"2024-09-16T10:49:47.744523Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-16T10:49:47.753231Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","commit-index":515}
	{"level":"info","ts":"2024-09-16T10:49:47.754041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-16T10:49:47.754098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became follower at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:47.754122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b435b960bec7c3c [peers: [], term: 3, commit: 515, applied: 0, lastindex: 515, lastterm: 3]"}
	{"level":"warn","ts":"2024-09-16T10:49:47.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-16T10:49:47.759048Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":489}
	{"level":"info","ts":"2024-09-16T10:49:47.762168Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-16T10:49:47.763923Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b435b960bec7c3c","timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:47.764228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:47.764268Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b435b960bec7c3c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T10:49:47.764903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:47.766996Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T10:49:47.767044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:49:47.767440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:47.767550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:47.768180Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:47.768234Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:47.768334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:47.768351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	
	
	==> kernel <==
	 10:50:21 up 32 min,  0 users,  load average: 0.94, 0.53, 0.30
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [46d889fefcb7] <==
	I0916 10:49:53.575283       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:53.575301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:53.575465       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:53.580633       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:49:53.580673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:53.596395       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:53.596433       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:53.596442       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:53.596449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:53.596455       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:53.599321       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:54.478124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:55.207989       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:49:55.217830       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:49:55.248987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:49:55.269731       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:49:55.276367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:49:57.099450       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:49:57.249320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:50:15.353595       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:50:15.383572       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:50:15.462425       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.78.248"}
	I0916 10:50:15.474116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.44.150"}
	
	
	==> kube-apiserver [67e355cfcbda] <==
	W0916 10:49:45.070608       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.109161       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.120779       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.120899       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.134173       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.149767       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.185767       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.187044       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.304341       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.320994       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.344654       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.353348       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.380165       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.387448       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.409947       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.461534       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.512147       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.532416       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.603473       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.683743       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.694566       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.695882       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.698138       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.773255       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:49:45.792702       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c8dc9f7334c] <==
	I0916 10:49:48.173517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [7bedc882faf6] <==
	I0916 10:49:57.496027       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:49:57.813840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.70585ms"
	I0916 10:49:57.813980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.84µs"
	I0916 10:50:15.401924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.824932ms"
	E0916 10:50:15.401965       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.228174ms"
	E0916 10:50:15.406402       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.685351ms"
	E0916 10:50:15.406718       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.684096ms"
	E0916 10:50:15.412650       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.953128ms"
	E0916 10:50:15.413009       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.425332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.293667ms"
	I0916 10:50:15.431862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.471098ms"
	I0916 10:50:15.431951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="54.442µs"
	I0916 10:50:15.435557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.685µs"
	I0916 10:50:15.444643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.47397ms"
	I0916 10:50:15.450160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.47806ms"
	I0916 10:50:15.450257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.466µs"
	I0916 10:50:15.455986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="40.936µs"
	I0916 10:50:21.485772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.612481ms"
	I0916 10:50:21.485883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.788µs"
	I0916 10:50:21.496496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.060642ms"
	I0916 10:50:21.496566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.635µs"
	
	
	==> kube-proxy [25e33a97327c] <==
	I0916 10:49:54.681567       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:54.797102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:54.797163       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:54.816103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:54.816152       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:54.817801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:54.818176       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:54.818215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:54.819244       1 config.go:199] "Starting service config controller"
	I0916 10:49:54.819298       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:54.819317       1 config.go:328] "Starting node config controller"
	I0916 10:49:54.819328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:54.819356       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:54.819397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:54.919504       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:54.919540       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:54.919510       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d36cca85a0cf] <==
	I0916 10:49:47.834945       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:49:47.965482       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	
	
	==> kube-scheduler [3c1686a3f081] <==
	I0916 10:49:48.153578       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:48.393574       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://10.138.0.48:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.138.0.48:8441: connect: connection refused
	W0916 10:49:48.393620       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:48.393632       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:48.399434       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:48.399458       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 10:49:48.399475       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 10:49:48.401582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:48.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:49:48.401653       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 10:49:48.401826       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:48.401867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:48.401888       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:49:48.401944       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0916 10:49:48.401999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9db9497d6e3b] <==
	I0916 10:49:51.325271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:53.502430       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:53.502467       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0916 10:49:53.502481       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:53.502490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:53.525152       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:53.525177       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:53.527126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:53.527171       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:53.527325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:53.527440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:53.627582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:50:21 UTC. --
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: E0916 10:49:50.645746   46464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ubuntu-20-agent-2?timeout=10s\": dial tcp 10.138.0.48:8441: connect: connection refused" interval="800ms"
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.832845   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600201   46464 kubelet_node_status.go:111] "Node was previously registered" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600319   46464 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600358   46464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.601084   46464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.038292   46464 apiserver.go:52] "Watching apiserver"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.041192   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.043622   46464 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.051286   46464 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.065037   46464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ababb2af12b481e591ddfe93ae3b1f" path="/var/lib/kubelet/pods/a5ababb2af12b481e591ddfe93ae3b1f/volumes"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.093533   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=0.093511983 podStartE2EDuration="93.511983ms" podCreationTimestamp="2024-09-16 10:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:49:54.0850596 +0000 UTC m=+4.114093707" watchObservedRunningTime="2024-09-16 10:49:54.093511983 +0000 UTC m=+4.122546090"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100225   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-xtables-lock\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100303   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-lib-modules\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100365   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfe4a726-3764-4daf-a322-8f33ae3528f7-tmp\") pod \"storage-provisioner\" (UID: \"dfe4a726-3764-4daf-a322-8f33ae3528f7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.211205   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:57 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:57.796237   46464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: E0916 10:50:15.424498   46464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.424567   46464 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531002   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c77012c-f486-455a-948c-0a12d040e2d0-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531047   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp2t\" (UniqueName: \"kubernetes.io/projected/0b84536b-e981-44f8-9021-6593d46481c1-kube-api-access-nzp2t\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531072   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz4t6\" (UniqueName: \"kubernetes.io/projected/2c77012c-f486-455a-948c-0a12d040e2d0-kube-api-access-tz4t6\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531091   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0b84536b-e981-44f8-9021-6593d46481c1-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.638442   46464 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:50:21 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:21.489031   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6" podStartSLOduration=1.288104906 podStartE2EDuration="6.489005142s" podCreationTimestamp="2024-09-16 10:50:15 +0000 UTC" firstStartedPulling="2024-09-16 10:50:16.00867893 +0000 UTC m=+26.037713028" lastFinishedPulling="2024-09-16 10:50:21.20957917 +0000 UTC m=+31.238613264" observedRunningTime="2024-09-16 10:50:21.4795614 +0000 UTC m=+31.508595511" watchObservedRunningTime="2024-09-16 10:50:21.489005142 +0000 UTC m=+31.518039250"
	
	
	==> kubernetes-dashboard [b7dca8e1a741] <==
	2024/09/16 10:50:20 Using namespace: kubernetes-dashboard
	2024/09/16 10:50:20 Using in-cluster config to connect to apiserver
	2024/09/16 10:50:20 Using secret token for csrf signing
	2024/09/16 10:50:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:50:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:50:20 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:50:20 Generating JWE encryption key
	2024/09/16 10:50:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:50:21 Initializing JWE encryption key from synchronized object
	2024/09/16 10:50:21 Creating in-cluster Sidecar client
	2024/09/16 10:50:21 Serving insecurely on HTTP port: 9090
	2024/09/16 10:50:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [088c924c7836] <==
	I0916 10:49:54.673228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:49:54.686267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:49:54.686349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:50:12.083437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:50:12.083563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"741f2d64-542e-41ba-a831-0f0a3ad64a15", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77 became leader
	I0916 10:50:12.083591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	I0916 10:50:12.184444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	
	
	==> storage-provisioner [b80696d65d3f] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (422.703µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.17s)
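
Note: "fork/exec /usr/local/bin/kubectl: exec format error" is the kernel's ENOEXEC: the file at /usr/local/bin/kubectl is not something this linux/amd64 host can execute, most commonly a binary built for another architecture or a truncated download, so the test fails before ever reaching the cluster. Below is a minimal diagnostic sketch (not part of the test suite; only the path is taken from the output above) that checks the ELF magic and machine type of the binary:

	package main

	import (
		"encoding/binary"
		"fmt"
		"io"
		"os"
	)

	func main() {
		path := "/usr/local/bin/kubectl" // path copied from the failing test output
		f, err := os.Open(path)
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()

		// The ELF header starts with the magic \x7fELF; e_machine is a
		// little-endian uint16 at offset 18 (0x3e == EM_X86_64).
		hdr := make([]byte, 20)
		if _, err := io.ReadFull(f, hdr); err != nil {
			fmt.Println("short read; file may be truncated:", err)
			return
		}
		if string(hdr[:4]) != "\x7fELF" {
			fmt.Println("not an ELF binary: exec format error is expected")
			return
		}
		fmt.Printf("ELF e_machine: %#x (0x3e is x86-64)\n", binary.LittleEndian.Uint16(hdr[18:20]))
	}

On a healthy linux/amd64 agent this prints 0x3e; any other result explains the exec format error seen in every kubectl invocation below.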

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (106.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dfe4a726-3764-4daf-a322-8f33ae3528f7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00289734s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (527.14µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (514.48µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (483.605µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (553.223µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (528.756µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (494.321µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (501.426µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (533.814µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (503.218µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (508.803µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (500.569µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (547.635µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context minikube get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (532.943µs)
functional_test_pvc_test.go:65: failed to check for storage class: fork/exec /usr/local/bin/kubectl: exec format error
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:69: (dbg) Non-zero exit: kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (385.513µs)
functional_test_pvc_test.go:71: kubectl apply pvc.yaml failed: args "kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
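
Note: the failing step, `kubectl --context minikube get storageclass -o=json`, is a plain List against the storage.k8s.io/v1 API, so nothing above indicates a cluster-side problem, only that the kubectl binary itself cannot run (the same exec format error as in the previous test). For reference, a minimal client-go sketch of the equivalent call (assumes a kubeconfig at the default ~/.kube/config location; the CI run sets KUBECONFIG to a different path):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a REST config from the default kubeconfig location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of `kubectl get storageclass`: list storage.k8s.io/v1 objects.
		scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}
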
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:44 UTC |
	| start     | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:44 UTC | 16 Sep 24 10:45 UTC |
	|           | --cert-expiration=3m                                                     |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --memory=2048                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	|           | --cert-expiration=8760h                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start     | -p minikube --memory=4000                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|           | --apiserver-port=8441                                                    |          |         |         |                     |                     |
	|           | --wait=all --driver=none                                                 |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --alsologtostderr                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | -v=8                                                                     |          |         |         |                     |                     |
	| kubectl   | minikube kubectl -- --context                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | minikube get pods                                                        |          |         |         |                     |                     |
	| start     | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |          |         |         |                     |                     |
	|           | --wait=all                                                               |          |         |         |                     |                     |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| config    | minikube config set cpus 2                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| dashboard | --url --port 36195 -p minikube                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr                                                        |          |         |         |                     |                     |
	|           | -v=1 --driver=none                                                       |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| service   | minikube service list                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service   | minikube service list -o json                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service   | minikube service                                                         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --namespace=default --https                                              |          |         |         |                     |                     |
	|           | --url hello-node                                                         |          |         |         |                     |                     |
	| service   | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --url --format={{.IP}}                                                   |          |         |         |                     |                     |
	| service   | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --url                                                                    |          |         |         |                     |                     |
	| addons    | minikube addons list                                                     | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| addons    | minikube addons list -o json                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:50:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:50:17.013809   49522 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:50:17.013928   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.013940   49522 out.go:358] Setting ErrFile to fd 2...
	I0916 10:50:17.013947   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.014283   49522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:50:17.014884   49522 out.go:352] Setting JSON to false
	I0916 10:50:17.016300   49522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1968,"bootTime":1726481849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:50:17.016418   49522 start.go:139] virtualization: kvm guest
	I0916 10:50:17.018914   49522 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:50:17.020443   49522 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:50:17.020481   49522 notify.go:220] Checking for updates...
	I0916 10:50:17.020483   49522 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:50:17.021852   49522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:50:17.023292   49522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:17.024682   49522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:50:17.025975   49522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:50:17.027472   49522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:50:17.029411   49522 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:17.029834   49522 exec_runner.go:51] Run: systemctl --version
	I0916 10:50:17.032099   49522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:50:17.042311   49522 out.go:177] * Using the none driver based on existing profile
	I0916 10:50:17.043885   49522 start.go:297] selected driver: none
	I0916 10:50:17.043900   49522 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:50:17.044037   49522 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:50:17.044058   49522 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:50:17.044345   49522 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:50:17.046514   49522 out.go:201] 
	W0916 10:50:17.047718   49522 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:50:17.049056   49522 out.go:201] 
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:52:08 UTC. --
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.328419724Z" level=info msg="ignoring event" container=4cc6aa8bc7d5e9b6c23b0ffef1d7dd33c125694c09d123e93105211110fc35d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.342215724Z" level=info msg="ignoring event" container=7b5dd454fcc4f4ca4ab258f0f3f3f6b009d55ed512e77ba61d248f8d98c06cb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.355706853Z" level=info msg="ignoring event" container=733fde545b9700e451efe7302c3fab774b29f95a4e2a4c266185a1f6906b6305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.362421026Z" level=info msg="ignoring event" container=d36cca85a0cf0e08b86d5f561cee6dadd426b71f565584ca300ff922a44b6af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.419908685Z" level=info msg="ignoring event" container=3c1686a3f081659b27d32842de1f945b93fd57c4bda45349659678d8dbd8152d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"33693827aa1af634593b8fe1bf32ef602c24c24b9b2b084a7cf0811d3e52d0a4\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"75baf2b9ae9f6924e7f354be0debcdc1254644d58d79381d5ce09b167a3ac872\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87e5de0471ea69fb8e34c546e4892215dd0cf17c295ac4ade0e5f68165e028e4/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857b5574b5ed24fd458b7d9caeb741273b94cafa380f363c834dc741c67be6bc/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2740906d206d0180f54e8558d2448e37481489a23df6bfd12097d07aa61a198/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b5e4a7c1dc72c399487814945c2fe454277fa0ed099902c0983e1d7bf97645f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:51 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:53 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f3456b9ca9b8f7ddd786697c6f8a2fd71715f0ee116f88138b76e67c24ceb3c/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e79acef8fbbd7a1f8cc65da627523ab9ab48441a2fe2f69d88f9fc35aba2cb2/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f04dd1758d06d211cc71418383ba2aa440d9092c700cd0c206655578bf0b049f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/396cdfa7884cc327569a77054f27020715549649b6a7fd3b233783d296023cb9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206254bc5172ca5de6cd75834006383ffaea64ecd25d9953cb741a27628a5a9f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:19 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:19Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 16 10:50:21 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:21Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	f3dad1361e62c       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   About a minute ago   Running             dashboard-metrics-scraper   0                   206254bc5172c       dashboard-metrics-scraper-c5db448b4-n42l6
	b7dca8e1a7411       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   396cdfa7884cc       kubernetes-dashboard-695b96c756-ft6nz
	76cbcdfc11b3b       c69fa2e9cbf5f                                                                                          2 minutes ago        Running             coredns                     2                   f04dd1758d06d       coredns-7c65d6cfc9-9tmvq
	088c924c78362       6e38f40d628db                                                                                          2 minutes ago        Running             storage-provisioner         4                   3e79acef8fbbd       storage-provisioner
	25e33a97327c4       60c005f310ff3                                                                                          2 minutes ago        Running             kube-proxy                  3                   4f3456b9ca9b8       kube-proxy-lt5f5
	9db9497d6e3b9       9aa1fad941575                                                                                          2 minutes ago        Running             kube-scheduler              3                   5b5e4a7c1dc72       kube-scheduler-ubuntu-20-agent-2
	88111361538ed       2e96e5913fc06                                                                                          2 minutes ago        Running             etcd                        3                   d2740906d206d       etcd-ubuntu-20-agent-2
	7bedc882faf66       175ffd71cce3d                                                                                          2 minutes ago        Running             kube-controller-manager     3                   857b5574b5ed2       kube-controller-manager-ubuntu-20-agent-2
	46d889fefcb7a       6bab7719df100                                                                                          2 minutes ago        Running             kube-apiserver              0                   87e5de0471ea6       kube-apiserver-ubuntu-20-agent-2
	4c8dc9f7334c2       175ffd71cce3d                                                                                          2 minutes ago        Exited              kube-controller-manager     2                   4045e763ce4dd       kube-controller-manager-ubuntu-20-agent-2
	3c1686a3f0816       9aa1fad941575                                                                                          2 minutes ago        Exited              kube-scheduler              2                   733fde545b970       kube-scheduler-ubuntu-20-agent-2
	d36cca85a0cf0       60c005f310ff3                                                                                          2 minutes ago        Exited              kube-proxy                  2                   4cc6aa8bc7d5e       kube-proxy-lt5f5
	89edf012e73d5       2e96e5913fc06                                                                                          2 minutes ago        Exited              etcd                        2                   7b5dd454fcc4f       etcd-ubuntu-20-agent-2
	b80696d65d3f0       6e38f40d628db                                                                                          2 minutes ago        Created             storage-provisioner         3                   b51e183b7b46c       storage-provisioner
	a45299c063bb1       c69fa2e9cbf5f                                                                                          2 minutes ago        Exited              coredns                     1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	
	
	==> coredns [76cbcdfc11b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58241 - 18724 "HINFO IN 6119160872083283358.4362415468974086659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018519672s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:52:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m13s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m19s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-n42l6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ft6nz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m11s                  kube-proxy       
	  Normal   Starting                 2m13s                  kube-proxy       
	  Normal   Starting                 2m50s                  kube-proxy       
	  Normal   NodeHasSufficientPID     3m18s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 3m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m18s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m18s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m14s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           2m47s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m18s (x7 over 2m18s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m18s (x7 over 2m18s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m12s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	[Sep16 10:50] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 9c f5 dc 07 74 08 06
	
	
	==> etcd [88111361538e] <==
	{"level":"info","ts":"2024-09-16T10:49:50.871606Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:50.871736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:50.874219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:50.874741Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874798Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874869Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:50.874900Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:52.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.662104Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:52.662126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.662344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.663195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:52.664047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [89edf012e73d] <==
	{"level":"info","ts":"2024-09-16T10:49:47.744523Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-16T10:49:47.753231Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","commit-index":515}
	{"level":"info","ts":"2024-09-16T10:49:47.754041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-16T10:49:47.754098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became follower at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:47.754122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b435b960bec7c3c [peers: [], term: 3, commit: 515, applied: 0, lastindex: 515, lastterm: 3]"}
	{"level":"warn","ts":"2024-09-16T10:49:47.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-16T10:49:47.759048Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":489}
	{"level":"info","ts":"2024-09-16T10:49:47.762168Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-16T10:49:47.763923Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b435b960bec7c3c","timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:47.764228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:47.764268Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b435b960bec7c3c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T10:49:47.764903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:47.766996Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T10:49:47.767044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:49:47.767440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:47.767550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:47.768180Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:47.768234Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:47.768334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:47.768351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	
	
	==> kernel <==
	 10:52:08 up 34 min,  0 users,  load average: 0.26, 0.40, 0.27
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [46d889fefcb7] <==
	I0916 10:49:53.575283       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:53.575301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:53.575465       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:53.580633       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:49:53.580673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:53.596395       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:53.596433       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:53.596442       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:53.596449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:53.596455       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:53.599321       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:54.478124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:55.207989       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:49:55.217830       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:49:55.248987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:49:55.269731       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:49:55.276367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:49:57.099450       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:49:57.249320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:50:15.353595       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:50:15.383572       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:50:15.462425       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.78.248"}
	I0916 10:50:15.474116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.44.150"}
	
	
	==> kube-controller-manager [4c8dc9f7334c] <==
	I0916 10:49:48.173517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [7bedc882faf6] <==
	I0916 10:49:57.813980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.84µs"
	I0916 10:50:15.401924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.824932ms"
	E0916 10:50:15.401965       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.228174ms"
	E0916 10:50:15.406402       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.685351ms"
	E0916 10:50:15.406718       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.684096ms"
	E0916 10:50:15.412650       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.953128ms"
	E0916 10:50:15.413009       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.425332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.293667ms"
	I0916 10:50:15.431862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.471098ms"
	I0916 10:50:15.431951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="54.442µs"
	I0916 10:50:15.435557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.685µs"
	I0916 10:50:15.444643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.47397ms"
	I0916 10:50:15.450160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.47806ms"
	I0916 10:50:15.450257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.466µs"
	I0916 10:50:15.455986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="40.936µs"
	I0916 10:50:21.485772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.612481ms"
	I0916 10:50:21.485883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.788µs"
	I0916 10:50:21.496496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.060642ms"
	I0916 10:50:21.496566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.635µs"
	I0916 10:50:24.392883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:50:55.024723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	
	
	==> kube-proxy [25e33a97327c] <==
	I0916 10:49:54.681567       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:54.797102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:54.797163       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:54.816103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:54.816152       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:54.817801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:54.818176       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:54.818215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:54.819244       1 config.go:199] "Starting service config controller"
	I0916 10:49:54.819298       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:54.819317       1 config.go:328] "Starting node config controller"
	I0916 10:49:54.819328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:54.819356       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:54.819397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:54.919504       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:54.919540       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:54.919510       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d36cca85a0cf] <==
	I0916 10:49:47.834945       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:49:47.965482       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	
	
	==> kube-scheduler [3c1686a3f081] <==
	I0916 10:49:48.153578       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:48.393574       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://10.138.0.48:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.138.0.48:8441: connect: connection refused
	W0916 10:49:48.393620       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:48.393632       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:48.399434       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:48.399458       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 10:49:48.399475       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 10:49:48.401582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:48.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:49:48.401653       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 10:49:48.401826       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:48.401867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:48.401888       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:49:48.401944       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0916 10:49:48.401999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9db9497d6e3b] <==
	I0916 10:49:51.325271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:53.502430       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:53.502467       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0916 10:49:53.502481       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:53.502490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:53.525152       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:53.525177       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:53.527126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:53.527171       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:53.527325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:53.527440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:53.627582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:52:08 UTC. --
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.832845   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600201   46464 kubelet_node_status.go:111] "Node was previously registered" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600319   46464 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600358   46464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.601084   46464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.038292   46464 apiserver.go:52] "Watching apiserver"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.041192   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.043622   46464 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.051286   46464 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.065037   46464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ababb2af12b481e591ddfe93ae3b1f" path="/var/lib/kubelet/pods/a5ababb2af12b481e591ddfe93ae3b1f/volumes"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.093533   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=0.093511983 podStartE2EDuration="93.511983ms" podCreationTimestamp="2024-09-16 10:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:49:54.0850596 +0000 UTC m=+4.114093707" watchObservedRunningTime="2024-09-16 10:49:54.093511983 +0000 UTC m=+4.122546090"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100225   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-xtables-lock\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100303   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-lib-modules\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100365   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfe4a726-3764-4daf-a322-8f33ae3528f7-tmp\") pod \"storage-provisioner\" (UID: \"dfe4a726-3764-4daf-a322-8f33ae3528f7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.211205   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:57 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:57.796237   46464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: E0916 10:50:15.424498   46464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.424567   46464 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531002   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c77012c-f486-455a-948c-0a12d040e2d0-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531047   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp2t\" (UniqueName: \"kubernetes.io/projected/0b84536b-e981-44f8-9021-6593d46481c1-kube-api-access-nzp2t\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531072   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz4t6\" (UniqueName: \"kubernetes.io/projected/2c77012c-f486-455a-948c-0a12d040e2d0-kube-api-access-tz4t6\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531091   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0b84536b-e981-44f8-9021-6593d46481c1-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.638442   46464 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:50:21 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:21.489031   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6" podStartSLOduration=1.288104906 podStartE2EDuration="6.489005142s" podCreationTimestamp="2024-09-16 10:50:15 +0000 UTC" firstStartedPulling="2024-09-16 10:50:16.00867893 +0000 UTC m=+26.037713028" lastFinishedPulling="2024-09-16 10:50:21.20957917 +0000 UTC m=+31.238613264" observedRunningTime="2024-09-16 10:50:21.4795614 +0000 UTC m=+31.508595511" watchObservedRunningTime="2024-09-16 10:50:21.489005142 +0000 UTC m=+31.518039250"
	Sep 16 10:50:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:50.228146   46464 scope.go:117] "RemoveContainer" containerID="67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74"
	
	
	==> kubernetes-dashboard [b7dca8e1a741] <==
	2024/09/16 10:50:20 Using namespace: kubernetes-dashboard
	2024/09/16 10:50:20 Using in-cluster config to connect to apiserver
	2024/09/16 10:50:20 Using secret token for csrf signing
	2024/09/16 10:50:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:50:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:50:20 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:50:20 Generating JWE encryption key
	2024/09/16 10:50:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:50:21 Initializing JWE encryption key from synchronized object
	2024/09/16 10:50:21 Creating in-cluster Sidecar client
	2024/09/16 10:50:21 Serving insecurely on HTTP port: 9090
	2024/09/16 10:50:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 10:50:51 Successful request to sidecar
	
	
	==> storage-provisioner [088c924c7836] <==
	I0916 10:49:54.673228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:49:54.686267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:49:54.686349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:50:12.083437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:50:12.083563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"741f2d64-542e-41ba-a831-0f0a3ad64a15", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77 became leader
	I0916 10:50:12.083591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	I0916 10:50:12.184444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	
	
	==> storage-provisioner [b80696d65d3f] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (394.37µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (106.63s)
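
The PersistentVolumeClaim failure above, like the Tunnel and MySQL failures below, reduces to a single root cause: every kubectl invocation dies with "fork/exec /usr/local/bin/kubectl: exec format error", the kernel's ENOEXEC, returned when the file being exec'd is not a valid executable for this machine (typically a wrong-architecture or truncated binary). A minimal diagnostic sketch, outside the test suite and assuming only the kubectl path reported in the logs, that would confirm or rule out an architecture mismatch:

// Diagnostic sketch (not part of the minikube test suite): read the
// ELF header of the binary that fails to exec and compare its machine
// type to the host. The path is taken from the failing logs; the use
// of debug/elf here is an assumption for illustration.
package main

import (
	"debug/elf"
	"fmt"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path reported in the failures

	f, err := elf.Open(path)
	if err != nil {
		// A truncated or non-ELF file (e.g. an error page saved as
		// "kubectl") fails here and would also yield exec format error.
		fmt.Printf("%s is not a readable ELF binary: %v\n", path, err)
		return
	}
	defer f.Close()

	// On a linux/amd64 agent a healthy kubectl reports EM_X86_64; any
	// other machine type matches the kernel's exec format error.
	fmt.Printf("binary machine=%v, host arch=%s\n", f.Machine, runtime.GOARCH)
}

Checking the ELF header rather than just the file's existence distinguishes a bad download from a missing binary, which is the useful distinction for this class of failure.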

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context minikube apply -f testdata/testsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (464.985µs)
functional_test_tunnel_test.go:214: kubectl --context minikube apply -f testdata/testsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context minikube get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context minikube get svc nginx-svc: fork/exec /usr/local/bin/kubectl: exec format error (587.912µs)
functional_test_tunnel_test.go:292: kubectl --context minikube get svc nginx-svc failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.14s)
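
The AccessDirect failure is a downstream symptom rather than a second bug: with kubectl unable to run, the nginx-svc tunnel URL was never populated, so the test issued a GET against the bare string "http:". The exact error text comes from net/http refusing a request whose URL has a scheme but no host, as this small sketch (standard library only) shows:

// Sketch reproducing the "http: no Host in request URL" message seen
// above: "http:" parses as a URL with a scheme but an empty Host, and
// net/http refuses to send a request without one.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	u, _ := url.Parse("http:")
	fmt.Printf("scheme=%q host=%q\n", u.Scheme, u.Host) // scheme="http" host=""

	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}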

                                                
                                    
TestFunctional/parallel/MySQL (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context minikube replace --force -f testdata/mysql.yaml: fork/exec /usr/local/bin/kubectl: exec format error (456.774µs)
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context minikube replace --force -f testdata/mysql.yaml" failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete    | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:48 UTC |
	| start     | -p minikube --memory=4000                                                | minikube | jenkins | v1.34.0 | 16 Sep 24 10:48 UTC | 16 Sep 24 10:49 UTC |
	|           | --apiserver-port=8441                                                    |          |         |         |                     |                     |
	|           | --wait=all --driver=none                                                 |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --alsologtostderr                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | -v=8                                                                     |          |         |         |                     |                     |
	| kubectl   | minikube kubectl -- --context                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|           | minikube get pods                                                        |          |         |         |                     |                     |
	| start     | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |          |         |         |                     |                     |
	|           | --wait=all                                                               |          |         |         |                     |                     |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| config    | minikube config set cpus 2                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config    | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| dashboard | --url --port 36195 -p minikube                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --alsologtostderr                                                        |          |         |         |                     |                     |
	|           | -v=1 --driver=none                                                       |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start     | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|           | --driver=none                                                            |          |         |         |                     |                     |
	|           | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| service   | minikube service list                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service   | minikube service list -o json                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service   | minikube service                                                         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --namespace=default --https                                              |          |         |         |                     |                     |
	|           | --url hello-node                                                         |          |         |         |                     |                     |
	| service   | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --url --format={{.IP}}                                                   |          |         |         |                     |                     |
	| service   | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|           | --url                                                                    |          |         |         |                     |                     |
	| addons    | minikube addons list                                                     | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| addons    | minikube addons list -o json                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| tunnel    | minikube tunnel                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|           | --alsologtostderr                                                        |          |         |         |                     |                     |
	| tunnel    | minikube tunnel                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|           | --alsologtostderr                                                        |          |         |         |                     |                     |
	| tunnel    | minikube tunnel                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|           | --alsologtostderr                                                        |          |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:50:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:50:17.013809   49522 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:50:17.013928   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.013940   49522 out.go:358] Setting ErrFile to fd 2...
	I0916 10:50:17.013947   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.014283   49522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:50:17.014884   49522 out.go:352] Setting JSON to false
	I0916 10:50:17.016300   49522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1968,"bootTime":1726481849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:50:17.016418   49522 start.go:139] virtualization: kvm guest
	I0916 10:50:17.018914   49522 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:50:17.020443   49522 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:50:17.020481   49522 notify.go:220] Checking for updates...
	I0916 10:50:17.020483   49522 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:50:17.021852   49522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:50:17.023292   49522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:17.024682   49522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:50:17.025975   49522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:50:17.027472   49522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:50:17.029411   49522 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:17.029834   49522 exec_runner.go:51] Run: systemctl --version
	I0916 10:50:17.032099   49522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:50:17.042311   49522 out.go:177] * Using the none driver based on existing profile
	I0916 10:50:17.043885   49522 start.go:297] selected driver: none
	I0916 10:50:17.043900   49522 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:50:17.044037   49522 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:50:17.044058   49522 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:50:17.044345   49522 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:50:17.046514   49522 out.go:201] 
	W0916 10:50:17.047718   49522 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:50:17.049056   49522 out.go:201] 
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:53:53 UTC. --
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.328419724Z" level=info msg="ignoring event" container=4cc6aa8bc7d5e9b6c23b0ffef1d7dd33c125694c09d123e93105211110fc35d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.342215724Z" level=info msg="ignoring event" container=7b5dd454fcc4f4ca4ab258f0f3f3f6b009d55ed512e77ba61d248f8d98c06cb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.355706853Z" level=info msg="ignoring event" container=733fde545b9700e451efe7302c3fab774b29f95a4e2a4c266185a1f6906b6305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.362421026Z" level=info msg="ignoring event" container=d36cca85a0cf0e08b86d5f561cee6dadd426b71f565584ca300ff922a44b6af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.419908685Z" level=info msg="ignoring event" container=3c1686a3f081659b27d32842de1f945b93fd57c4bda45349659678d8dbd8152d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"33693827aa1af634593b8fe1bf32ef602c24c24b9b2b084a7cf0811d3e52d0a4\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"75baf2b9ae9f6924e7f354be0debcdc1254644d58d79381d5ce09b167a3ac872\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87e5de0471ea69fb8e34c546e4892215dd0cf17c295ac4ade0e5f68165e028e4/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857b5574b5ed24fd458b7d9caeb741273b94cafa380f363c834dc741c67be6bc/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2740906d206d0180f54e8558d2448e37481489a23df6bfd12097d07aa61a198/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b5e4a7c1dc72c399487814945c2fe454277fa0ed099902c0983e1d7bf97645f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:51 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:53 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f3456b9ca9b8f7ddd786697c6f8a2fd71715f0ee116f88138b76e67c24ceb3c/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e79acef8fbbd7a1f8cc65da627523ab9ab48441a2fe2f69d88f9fc35aba2cb2/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f04dd1758d06d211cc71418383ba2aa440d9092c700cd0c206655578bf0b049f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/396cdfa7884cc327569a77054f27020715549649b6a7fd3b233783d296023cb9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206254bc5172ca5de6cd75834006383ffaea64ecd25d9953cb741a27628a5a9f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:19 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:19Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 16 10:50:21 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:21Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f3dad1361e62c       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 minutes ago       Running             dashboard-metrics-scraper   0                   206254bc5172c       dashboard-metrics-scraper-c5db448b4-n42l6
	b7dca8e1a7411       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         3 minutes ago       Running             kubernetes-dashboard        0                   396cdfa7884cc       kubernetes-dashboard-695b96c756-ft6nz
	76cbcdfc11b3b       c69fa2e9cbf5f                                                                                          3 minutes ago       Running             coredns                     2                   f04dd1758d06d       coredns-7c65d6cfc9-9tmvq
	088c924c78362       6e38f40d628db                                                                                          3 minutes ago       Running             storage-provisioner         4                   3e79acef8fbbd       storage-provisioner
	25e33a97327c4       60c005f310ff3                                                                                          3 minutes ago       Running             kube-proxy                  3                   4f3456b9ca9b8       kube-proxy-lt5f5
	9db9497d6e3b9       9aa1fad941575                                                                                          4 minutes ago       Running             kube-scheduler              3                   5b5e4a7c1dc72       kube-scheduler-ubuntu-20-agent-2
	88111361538ed       2e96e5913fc06                                                                                          4 minutes ago       Running             etcd                        3                   d2740906d206d       etcd-ubuntu-20-agent-2
	7bedc882faf66       175ffd71cce3d                                                                                          4 minutes ago       Running             kube-controller-manager     3                   857b5574b5ed2       kube-controller-manager-ubuntu-20-agent-2
	46d889fefcb7a       6bab7719df100                                                                                          4 minutes ago       Running             kube-apiserver              0                   87e5de0471ea6       kube-apiserver-ubuntu-20-agent-2
	4c8dc9f7334c2       175ffd71cce3d                                                                                          4 minutes ago       Exited              kube-controller-manager     2                   4045e763ce4dd       kube-controller-manager-ubuntu-20-agent-2
	3c1686a3f0816       9aa1fad941575                                                                                          4 minutes ago       Exited              kube-scheduler              2                   733fde545b970       kube-scheduler-ubuntu-20-agent-2
	d36cca85a0cf0       60c005f310ff3                                                                                          4 minutes ago       Exited              kube-proxy                  2                   4cc6aa8bc7d5e       kube-proxy-lt5f5
	89edf012e73d5       2e96e5913fc06                                                                                          4 minutes ago       Exited              etcd                        2                   7b5dd454fcc4f       etcd-ubuntu-20-agent-2
	b80696d65d3f0       6e38f40d628db                                                                                          4 minutes ago       Created             storage-provisioner         3                   b51e183b7b46c       storage-provisioner
	a45299c063bb1       c69fa2e9cbf5f                                                                                          4 minutes ago       Exited              coredns                     1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	
	
	==> coredns [76cbcdfc11b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58241 - 18724 "HINFO IN 6119160872083283358.4362415468974086659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018519672s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:53:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m58s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m4s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-n42l6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ft6nz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m57s                kube-proxy       
	  Normal   Starting                 3m59s                kube-proxy       
	  Normal   Starting                 4m36s                kube-proxy       
	  Normal   NodeHasSufficientPID     5m3s                 kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m3s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m3s                 kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                 kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m3s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m59s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           4m32s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 4m3s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4m3s (x7 over 4m3s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m57s                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	[Sep16 10:50] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 9c f5 dc 07 74 08 06
	
	
	==> etcd [88111361538e] <==
	{"level":"info","ts":"2024-09-16T10:49:50.871606Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:50.871736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:50.874219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:50.874741Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874798Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874869Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:50.874900Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:52.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.662104Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:52.662126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.662344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.663195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:52.664047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [89edf012e73d] <==
	{"level":"info","ts":"2024-09-16T10:49:47.744523Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-16T10:49:47.753231Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","commit-index":515}
	{"level":"info","ts":"2024-09-16T10:49:47.754041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-16T10:49:47.754098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became follower at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:47.754122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b435b960bec7c3c [peers: [], term: 3, commit: 515, applied: 0, lastindex: 515, lastterm: 3]"}
	{"level":"warn","ts":"2024-09-16T10:49:47.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-16T10:49:47.759048Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":489}
	{"level":"info","ts":"2024-09-16T10:49:47.762168Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-16T10:49:47.763923Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b435b960bec7c3c","timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:47.764228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:47.764268Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b435b960bec7c3c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T10:49:47.764903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:47.766996Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T10:49:47.767044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:49:47.767440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:47.767550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:47.768180Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:47.768234Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:47.768334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:47.768351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	
	
	==> kernel <==
	 10:53:53 up 36 min,  0 users,  load average: 0.14, 0.32, 0.26
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [46d889fefcb7] <==
	I0916 10:49:53.575283       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:53.575301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:53.575465       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:53.580633       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:49:53.580673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:53.596395       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:53.596433       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:53.596442       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:53.596449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:53.596455       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:53.599321       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:54.478124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:55.207989       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:49:55.217830       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:49:55.248987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:49:55.269731       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:49:55.276367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:49:57.099450       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:49:57.249320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:50:15.353595       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:50:15.383572       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:50:15.462425       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.78.248"}
	I0916 10:50:15.474116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.44.150"}
	
	
	==> kube-controller-manager [4c8dc9f7334c] <==
	I0916 10:49:48.173517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [7bedc882faf6] <==
	I0916 10:49:57.813980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.84µs"
	I0916 10:50:15.401924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.824932ms"
	E0916 10:50:15.401965       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.228174ms"
	E0916 10:50:15.406402       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.685351ms"
	E0916 10:50:15.406718       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.684096ms"
	E0916 10:50:15.412650       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.953128ms"
	E0916 10:50:15.413009       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.425332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.293667ms"
	I0916 10:50:15.431862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.471098ms"
	I0916 10:50:15.431951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="54.442µs"
	I0916 10:50:15.435557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.685µs"
	I0916 10:50:15.444643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.47397ms"
	I0916 10:50:15.450160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.47806ms"
	I0916 10:50:15.450257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.466µs"
	I0916 10:50:15.455986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="40.936µs"
	I0916 10:50:21.485772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.612481ms"
	I0916 10:50:21.485883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.788µs"
	I0916 10:50:21.496496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.060642ms"
	I0916 10:50:21.496566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.635µs"
	I0916 10:50:24.392883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:50:55.024723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	
	
	==> kube-proxy [25e33a97327c] <==
	I0916 10:49:54.681567       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:54.797102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:54.797163       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:54.816103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:54.816152       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:54.817801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:54.818176       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:54.818215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:54.819244       1 config.go:199] "Starting service config controller"
	I0916 10:49:54.819298       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:54.819317       1 config.go:328] "Starting node config controller"
	I0916 10:49:54.819328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:54.819356       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:54.819397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:54.919504       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:54.919540       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:54.919510       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d36cca85a0cf] <==
	I0916 10:49:47.834945       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:49:47.965482       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	
	
	==> kube-scheduler [3c1686a3f081] <==
	I0916 10:49:48.153578       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:48.393574       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://10.138.0.48:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.138.0.48:8441: connect: connection refused
	W0916 10:49:48.393620       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:48.393632       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:48.399434       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:48.399458       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 10:49:48.399475       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 10:49:48.401582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:48.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:49:48.401653       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 10:49:48.401826       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:48.401867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:48.401888       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:49:48.401944       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0916 10:49:48.401999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9db9497d6e3b] <==
	I0916 10:49:51.325271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:53.502430       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:53.502467       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0916 10:49:53.502481       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:53.502490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:53.525152       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:53.525177       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:53.527126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:53.527171       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:53.527325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:53.527440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:53.627582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:53:54 UTC. --
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.832845   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600201   46464 kubelet_node_status.go:111] "Node was previously registered" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600319   46464 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600358   46464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.601084   46464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.038292   46464 apiserver.go:52] "Watching apiserver"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.041192   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.043622   46464 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.051286   46464 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.065037   46464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ababb2af12b481e591ddfe93ae3b1f" path="/var/lib/kubelet/pods/a5ababb2af12b481e591ddfe93ae3b1f/volumes"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.093533   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=0.093511983 podStartE2EDuration="93.511983ms" podCreationTimestamp="2024-09-16 10:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:49:54.0850596 +0000 UTC m=+4.114093707" watchObservedRunningTime="2024-09-16 10:49:54.093511983 +0000 UTC m=+4.122546090"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100225   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-xtables-lock\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100303   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-lib-modules\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100365   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfe4a726-3764-4daf-a322-8f33ae3528f7-tmp\") pod \"storage-provisioner\" (UID: \"dfe4a726-3764-4daf-a322-8f33ae3528f7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.211205   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:57 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:57.796237   46464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: E0916 10:50:15.424498   46464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.424567   46464 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531002   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c77012c-f486-455a-948c-0a12d040e2d0-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531047   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp2t\" (UniqueName: \"kubernetes.io/projected/0b84536b-e981-44f8-9021-6593d46481c1-kube-api-access-nzp2t\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531072   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz4t6\" (UniqueName: \"kubernetes.io/projected/2c77012c-f486-455a-948c-0a12d040e2d0-kube-api-access-tz4t6\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531091   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0b84536b-e981-44f8-9021-6593d46481c1-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.638442   46464 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:50:21 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:21.489031   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6" podStartSLOduration=1.288104906 podStartE2EDuration="6.489005142s" podCreationTimestamp="2024-09-16 10:50:15 +0000 UTC" firstStartedPulling="2024-09-16 10:50:16.00867893 +0000 UTC m=+26.037713028" lastFinishedPulling="2024-09-16 10:50:21.20957917 +0000 UTC m=+31.238613264" observedRunningTime="2024-09-16 10:50:21.4795614 +0000 UTC m=+31.508595511" watchObservedRunningTime="2024-09-16 10:50:21.489005142 +0000 UTC m=+31.518039250"
	Sep 16 10:50:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:50.228146   46464 scope.go:117] "RemoveContainer" containerID="67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74"
	
	
	==> kubernetes-dashboard [b7dca8e1a741] <==
	2024/09/16 10:50:20 Using namespace: kubernetes-dashboard
	2024/09/16 10:50:20 Using in-cluster config to connect to apiserver
	2024/09/16 10:50:20 Using secret token for csrf signing
	2024/09/16 10:50:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:50:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:50:20 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:50:20 Generating JWE encryption key
	2024/09/16 10:50:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:50:21 Initializing JWE encryption key from synchronized object
	2024/09/16 10:50:21 Creating in-cluster Sidecar client
	2024/09/16 10:50:21 Serving insecurely on HTTP port: 9090
	2024/09/16 10:50:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 10:50:51 Successful request to sidecar
	
	
	==> storage-provisioner [088c924c7836] <==
	I0916 10:49:54.673228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:49:54.686267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:49:54.686349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:50:12.083437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:50:12.083563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"741f2d64-542e-41ba-a831-0f0a3ad64a15", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77 became leader
	I0916 10:50:12.083591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	I0916 10:50:12.184444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	
	
	==> storage-provisioner [b80696d65d3f] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (431.261µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/MySQL (1.08s)
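
The repeated `fork/exec /usr/local/bin/kubectl: exec format error` above means the kernel refused to execute the kubectl binary at all, which usually points at a binary built for the wrong architecture, or a truncated/corrupt file, rather than at the cluster. A minimal check, assuming shell access to the agent (the path comes from the log above):

	# Show what kind of file the binary actually is; on this host a working
	# kubectl should be reported as an ELF 64-bit x86-64 executable
	file /usr/local/bin/kubectl
	# Compare with the machine architecture (amd64/x86_64 per the node info below)
	uname -m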

                                                
                                    
TestFunctional/parallel/NodeLabels (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": fork/exec /usr/local/bin/kubectl: exec format error (424.062µs)
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context minikube get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
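
Note that the `describe nodes` output further down in this log does show all five expected minikube.k8s.io/* labels on ubuntu-20-agent-2, so the empty results here come from the unusable kubectl binary, not from missing labels. With a working kubectl, a hedged sketch of equivalent queries:

	# List every label on the nodes (covers the same data as the go-template above)
	kubectl --context minikube get nodes --show-labels
	# Or read one expected label directly (dots in the label key are escaped)
	kubectl --context minikube get nodes -o jsonpath='{.items[0].metadata.labels.minikube\.k8s\.io/name}'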
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| kubectl        | minikube kubectl -- --context                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|                | minikube get pods                                                        |          |         |         |                     |                     |
	| start          | -p minikube                                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|                | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |          |         |         |                     |                     |
	|                | --wait=all                                                               |          |         |         |                     |                     |
	| config         | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config         | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| config         | minikube config set cpus 2                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config         | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config         | minikube config unset cpus                                               | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| config         | minikube config get cpus                                                 | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	| dashboard      | --url --port 36195 -p minikube                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | --alsologtostderr -v=1                                                   |          |         |         |                     |                     |
	| start          | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|                | --driver=none                                                            |          |         |         |                     |                     |
	|                | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start          | -p minikube --dry-run                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | --alsologtostderr                                                        |          |         |         |                     |                     |
	|                | -v=1 --driver=none                                                       |          |         |         |                     |                     |
	|                | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| start          | -p minikube --dry-run --memory                                           | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | 250MB --alsologtostderr                                                  |          |         |         |                     |                     |
	|                | --driver=none                                                            |          |         |         |                     |                     |
	|                | --bootstrapper=kubeadm                                                   |          |         |         |                     |                     |
	| service        | minikube service list                                                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service        | minikube service list -o json                                            | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| service        | minikube service                                                         | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | --namespace=default --https                                              |          |         |         |                     |                     |
	|                | --url hello-node                                                         |          |         |         |                     |                     |
	| service        | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | --url --format={{.IP}}                                                   |          |         |         |                     |                     |
	| service        | minikube service hello-node                                              | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC |                     |
	|                | --url                                                                    |          |         |         |                     |                     |
	| addons         | minikube addons list                                                     | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| addons         | minikube addons list -o json                                             | minikube | jenkins | v1.34.0 | 16 Sep 24 10:50 UTC | 16 Sep 24 10:50 UTC |
	| tunnel         | minikube tunnel                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|                | --alsologtostderr                                                        |          |         |         |                     |                     |
	| tunnel         | minikube tunnel                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|                | --alsologtostderr                                                        |          |         |         |                     |                     |
	| tunnel         | minikube tunnel                                                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:52 UTC |                     |
	|                | --alsologtostderr                                                        |          |         |         |                     |                     |
	| update-context | minikube update-context                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:54 UTC |
	|                | --alsologtostderr -v=2                                                   |          |         |         |                     |                     |
	| update-context | minikube update-context                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:54 UTC |
	|                | --alsologtostderr -v=2                                                   |          |         |         |                     |                     |
	| update-context | minikube update-context                                                  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:53 UTC | 16 Sep 24 10:53 UTC |
	|                | --alsologtostderr -v=2                                                   |          |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:50:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:50:17.013809   49522 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:50:17.013928   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.013940   49522 out.go:358] Setting ErrFile to fd 2...
	I0916 10:50:17.013947   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.014283   49522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:50:17.014884   49522 out.go:352] Setting JSON to false
	I0916 10:50:17.016300   49522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1968,"bootTime":1726481849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:50:17.016418   49522 start.go:139] virtualization: kvm guest
	I0916 10:50:17.018914   49522 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:50:17.020443   49522 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:50:17.020481   49522 notify.go:220] Checking for updates...
	I0916 10:50:17.020483   49522 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:50:17.021852   49522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:50:17.023292   49522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:17.024682   49522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:50:17.025975   49522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:50:17.027472   49522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:50:17.029411   49522 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:17.029834   49522 exec_runner.go:51] Run: systemctl --version
	I0916 10:50:17.032099   49522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:50:17.042311   49522 out.go:177] * Using the none driver based on existing profile
	I0916 10:50:17.043885   49522 start.go:297] selected driver: none
	I0916 10:50:17.043900   49522 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:50:17.044037   49522 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:50:17.044058   49522 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:50:17.044345   49522 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:50:17.046514   49522 out.go:201] 
	W0916 10:50:17.047718   49522 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I0916 10:50:17.049056   49522 out.go:201] 
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:54:08 UTC. --
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.328419724Z" level=info msg="ignoring event" container=4cc6aa8bc7d5e9b6c23b0ffef1d7dd33c125694c09d123e93105211110fc35d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.342215724Z" level=info msg="ignoring event" container=7b5dd454fcc4f4ca4ab258f0f3f3f6b009d55ed512e77ba61d248f8d98c06cb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.355706853Z" level=info msg="ignoring event" container=733fde545b9700e451efe7302c3fab774b29f95a4e2a4c266185a1f6906b6305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.362421026Z" level=info msg="ignoring event" container=d36cca85a0cf0e08b86d5f561cee6dadd426b71f565584ca300ff922a44b6af9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:48 ubuntu-20-agent-2 dockerd[44786]: time="2024-09-16T10:49:48.419908685Z" level=info msg="ignoring event" container=3c1686a3f081659b27d32842de1f945b93fd57c4bda45349659678d8dbd8152d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"317985ddf47a1776e5dffdcabf0b6063a7be6dd5e1b0978b9cd1e22714e83916\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ad166eb13016a9855eec2083bee853825fd8cad580446d4e46637c49394bb10e\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59ae2583e1f56461dd5c09215b8dedf9f472b3e46e4bac225875b3dba7cc7434\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"cb842334bb4ef4dbfc1289eda9d31364a70d3f6237c8081bbf8ffb19a50404ce\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"33693827aa1af634593b8fe1bf32ef602c24c24b9b2b084a7cf0811d3e52d0a4\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"75baf2b9ae9f6924e7f354be0debcdc1254644d58d79381d5ce09b167a3ac872\". Proceed without further sandbox information."
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87e5de0471ea69fb8e34c546e4892215dd0cf17c295ac4ade0e5f68165e028e4/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857b5574b5ed24fd458b7d9caeb741273b94cafa380f363c834dc741c67be6bc/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2740906d206d0180f54e8558d2448e37481489a23df6bfd12097d07aa61a198/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:50 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b5e4a7c1dc72c399487814945c2fe454277fa0ed099902c0983e1d7bf97645f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:51 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9tmvq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"01deb4e9cb0cef579e6cf5428a2ec67138f88f9aa59914f7293974bf58be4113\""
	Sep 16 10:49:53 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f3456b9ca9b8f7ddd786697c6f8a2fd71715f0ee116f88138b76e67c24ceb3c/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e79acef8fbbd7a1f8cc65da627523ab9ab48441a2fe2f69d88f9fc35aba2cb2/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:49:54 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:49:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f04dd1758d06d211cc71418383ba2aa440d9092c700cd0c206655578bf0b049f/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/396cdfa7884cc327569a77054f27020715549649b6a7fd3b233783d296023cb9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:15 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206254bc5172ca5de6cd75834006383ffaea64ecd25d9953cb741a27628a5a9f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 10:50:19 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:19Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 16 10:50:21 ubuntu-20-agent-2 cri-dockerd[45141]: time="2024-09-16T10:50:21Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f3dad1361e62c       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 minutes ago       Running             dashboard-metrics-scraper   0                   206254bc5172c       dashboard-metrics-scraper-c5db448b4-n42l6
	b7dca8e1a7411       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         3 minutes ago       Running             kubernetes-dashboard        0                   396cdfa7884cc       kubernetes-dashboard-695b96c756-ft6nz
	76cbcdfc11b3b       c69fa2e9cbf5f                                                                                          4 minutes ago       Running             coredns                     2                   f04dd1758d06d       coredns-7c65d6cfc9-9tmvq
	088c924c78362       6e38f40d628db                                                                                          4 minutes ago       Running             storage-provisioner         4                   3e79acef8fbbd       storage-provisioner
	25e33a97327c4       60c005f310ff3                                                                                          4 minutes ago       Running             kube-proxy                  3                   4f3456b9ca9b8       kube-proxy-lt5f5
	9db9497d6e3b9       9aa1fad941575                                                                                          4 minutes ago       Running             kube-scheduler              3                   5b5e4a7c1dc72       kube-scheduler-ubuntu-20-agent-2
	88111361538ed       2e96e5913fc06                                                                                          4 minutes ago       Running             etcd                        3                   d2740906d206d       etcd-ubuntu-20-agent-2
	7bedc882faf66       175ffd71cce3d                                                                                          4 minutes ago       Running             kube-controller-manager     3                   857b5574b5ed2       kube-controller-manager-ubuntu-20-agent-2
	46d889fefcb7a       6bab7719df100                                                                                          4 minutes ago       Running             kube-apiserver              0                   87e5de0471ea6       kube-apiserver-ubuntu-20-agent-2
	4c8dc9f7334c2       175ffd71cce3d                                                                                          4 minutes ago       Exited              kube-controller-manager     2                   4045e763ce4dd       kube-controller-manager-ubuntu-20-agent-2
	3c1686a3f0816       9aa1fad941575                                                                                          4 minutes ago       Exited              kube-scheduler              2                   733fde545b970       kube-scheduler-ubuntu-20-agent-2
	d36cca85a0cf0       60c005f310ff3                                                                                          4 minutes ago       Exited              kube-proxy                  2                   4cc6aa8bc7d5e       kube-proxy-lt5f5
	89edf012e73d5       2e96e5913fc06                                                                                          4 minutes ago       Exited              etcd                        2                   7b5dd454fcc4f       etcd-ubuntu-20-agent-2
	b80696d65d3f0       6e38f40d628db                                                                                          4 minutes ago       Created             storage-provisioner         3                   b51e183b7b46c       storage-provisioner
	a45299c063bb1       c69fa2e9cbf5f                                                                                          4 minutes ago       Exited              coredns                     1                   6af15c63a0094       coredns-7c65d6cfc9-9tmvq
	
	
	==> coredns [76cbcdfc11b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58241 - 18724 "HINFO IN 6119160872083283358.4362415468974086659. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018519672s
	
	
	==> coredns [a45299c063bb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58211 - 33951 "HINFO IN 4546451134697352399.8219640238670837906. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015544508s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_48_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:54:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:55 +0000   Mon, 16 Sep 2024 10:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9tmvq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m13s
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m19s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-lt5f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-n42l6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-ft6nz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m12s                  kube-proxy       
	  Normal   Starting                 4m14s                  kube-proxy       
	  Normal   Starting                 4m50s                  kube-proxy       
	  Normal   NodeHasSufficientPID     5m18s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m18s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s                  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m14s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   RegisteredNode           4m47s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	  Normal   NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 4m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4m18s (x7 over 4m18s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m18s (x7 over 4m18s)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m12s                  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 82 a2 3b c6 36 08 06
	[  +0.152508] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b1 94 c5 c8 0e 08 06
	[  +0.074505] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 4b 73 68 0b 08 06
	[ +35.180386] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae ac 3f b4 03 05 08 06
	[  +0.034138] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ee dd ef 56 4c 08 06
	[ +12.606141] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 36 1c 2e 2f 5b 08 06
	[  +0.000744] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 52 1f f0 9e 38 08 06
	[Sep16 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fb a1 8f a9 54 08 06
	[Sep16 10:48] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	[Sep16 10:50] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 9c f5 dc 07 74 08 06
	
	
	==> etcd [88111361538e] <==
	{"level":"info","ts":"2024-09-16T10:49:50.871606Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:50.871736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:50.871929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:50.874219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:50.874741Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874798Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:50.874869Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:50.874900Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:52.660785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:52.660888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.660909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 4"}
	{"level":"info","ts":"2024-09-16T10:49:52.662104Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:52.662126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:49:52.662313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.662344Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:49:52.663195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:52.663955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:49:52.664047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [89edf012e73d] <==
	{"level":"info","ts":"2024-09-16T10:49:47.744523Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-16T10:49:47.753231Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","commit-index":515}
	{"level":"info","ts":"2024-09-16T10:49:47.754041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-16T10:49:47.754098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became follower at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:47.754122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b435b960bec7c3c [peers: [], term: 3, commit: 515, applied: 0, lastindex: 515, lastterm: 3]"}
	{"level":"warn","ts":"2024-09-16T10:49:47.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-16T10:49:47.759048Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":489}
	{"level":"info","ts":"2024-09-16T10:49:47.762168Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-16T10:49:47.763923Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b435b960bec7c3c","timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:49:47.764228Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b435b960bec7c3c"}
	{"level":"info","ts":"2024-09-16T10:49:47.764268Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b435b960bec7c3c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-16T10:49:47.764903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:49:47.766996Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T10:49:47.767044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:49:47.767348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:49:47.767440Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:47.767550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:49:47.767926Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:49:47.768180Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:49:47.768234Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:49:47.768334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:49:47.768351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	
	
	==> kernel <==
	 10:54:08 up 36 min,  0 users,  load average: 0.11, 0.30, 0.25
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [46d889fefcb7] <==
	I0916 10:49:53.575283       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:49:53.575301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:49:53.575465       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:49:53.575408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:49:53.580633       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:49:53.580673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:49:53.596395       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:49:53.596433       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:49:53.596442       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:49:53.596449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:49:53.596455       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:49:53.599321       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:49:54.478124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:49:55.207989       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:49:55.217830       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:49:55.248987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:49:55.269731       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:49:55.276367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:49:57.099450       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:49:57.249320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:50:15.353595       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:50:15.383572       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:50:15.462425       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.78.248"}
	I0916 10:50:15.474116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.44.150"}
	
	
	==> kube-controller-manager [4c8dc9f7334c] <==
	I0916 10:49:48.173517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [7bedc882faf6] <==
	I0916 10:49:57.813980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.84µs"
	I0916 10:50:15.401924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.824932ms"
	E0916 10:50:15.401965       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.228174ms"
	E0916 10:50:15.406402       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.406693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.685351ms"
	E0916 10:50:15.406718       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.684096ms"
	E0916 10:50:15.412650       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.412986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.953128ms"
	E0916 10:50:15.413009       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:50:15.425332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.293667ms"
	I0916 10:50:15.431862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.471098ms"
	I0916 10:50:15.431951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="54.442µs"
	I0916 10:50:15.435557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.685µs"
	I0916 10:50:15.444643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.47397ms"
	I0916 10:50:15.450160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.47806ms"
	I0916 10:50:15.450257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.466µs"
	I0916 10:50:15.455986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="40.936µs"
	I0916 10:50:21.485772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.612481ms"
	I0916 10:50:21.485883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.788µs"
	I0916 10:50:21.496496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.060642ms"
	I0916 10:50:21.496566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.635µs"
	I0916 10:50:24.392883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	I0916 10:50:55.024723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-2"
	
	
	==> kube-proxy [25e33a97327c] <==
	I0916 10:49:54.681567       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:49:54.797102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0916 10:49:54.797163       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:49:54.816103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:49:54.816152       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:49:54.817801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:49:54.818176       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:49:54.818215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:54.819244       1 config.go:199] "Starting service config controller"
	I0916 10:49:54.819298       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:49:54.819317       1 config.go:328] "Starting node config controller"
	I0916 10:49:54.819328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:49:54.819356       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:49:54.819397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:49:54.919504       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:49:54.919540       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:49:54.919510       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d36cca85a0cf] <==
	I0916 10:49:47.834945       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:49:47.965482       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/ubuntu-20-agent-2\": dial tcp 10.138.0.48:8441: connect: connection refused"
	
	
	==> kube-scheduler [3c1686a3f081] <==
	I0916 10:49:48.153578       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:48.393574       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://10.138.0.48:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.138.0.48:8441: connect: connection refused
	W0916 10:49:48.393620       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:48.393632       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:48.399434       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:48.399458       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0916 10:49:48.399475       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0916 10:49:48.401582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:48.401630       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:49:48.401653       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0916 10:49:48.401826       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:48.401867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:48.401888       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 10:49:48.401944       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0916 10:49:48.401999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9db9497d6e3b] <==
	I0916 10:49:51.325271       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:49:53.502430       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:49:53.502467       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0916 10:49:53.502481       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:49:53.502490       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:49:53.525152       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:49:53.525177       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:49:53.527126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:49:53.527171       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:49:53.527325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:49:53.527440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:49:53.627582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:54:09 UTC. --
	Sep 16 10:49:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:50.832845   46464 kubelet_node_status.go:72] "Attempting to register node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600201   46464 kubelet_node_status.go:111] "Node was previously registered" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600319   46464 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.600358   46464 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:49:53 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:53.601084   46464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.038292   46464 apiserver.go:52] "Watching apiserver"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.041192   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.043622   46464 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.051286   46464 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.065037   46464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ababb2af12b481e591ddfe93ae3b1f" path="/var/lib/kubelet/pods/a5ababb2af12b481e591ddfe93ae3b1f/volumes"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.093533   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=0.093511983 podStartE2EDuration="93.511983ms" podCreationTimestamp="2024-09-16 10:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:49:54.0850596 +0000 UTC m=+4.114093707" watchObservedRunningTime="2024-09-16 10:49:54.093511983 +0000 UTC m=+4.122546090"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100225   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-xtables-lock\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100303   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e01c31f-c798-45c0-98a2-ee94c3b9d631-lib-modules\") pod \"kube-proxy-lt5f5\" (UID: \"2e01c31f-c798-45c0-98a2-ee94c3b9d631\") " pod="kube-system/kube-proxy-lt5f5"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.100365   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfe4a726-3764-4daf-a322-8f33ae3528f7-tmp\") pod \"storage-provisioner\" (UID: \"dfe4a726-3764-4daf-a322-8f33ae3528f7\") " pod="kube-system/storage-provisioner"
	Sep 16 10:49:54 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:54.211205   46464 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podUID="d9fac362-fee0-4ee4-9a06-22b343085d2d"
	Sep 16 10:49:57 ubuntu-20-agent-2 kubelet[46464]: I0916 10:49:57.796237   46464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: E0916 10:50:15.424498   46464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.424567   46464 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ababb2af12b481e591ddfe93ae3b1f" containerName="kube-apiserver"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531002   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c77012c-f486-455a-948c-0a12d040e2d0-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531047   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp2t\" (UniqueName: \"kubernetes.io/projected/0b84536b-e981-44f8-9021-6593d46481c1-kube-api-access-nzp2t\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531072   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz4t6\" (UniqueName: \"kubernetes.io/projected/2c77012c-f486-455a-948c-0a12d040e2d0-kube-api-access-tz4t6\") pod \"kubernetes-dashboard-695b96c756-ft6nz\" (UID: \"2c77012c-f486-455a-948c-0a12d040e2d0\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-ft6nz"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.531091   46464 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0b84536b-e981-44f8-9021-6593d46481c1-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-n42l6\" (UID: \"0b84536b-e981-44f8-9021-6593d46481c1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6"
	Sep 16 10:50:15 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:15.638442   46464 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:50:21 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:21.489031   46464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-n42l6" podStartSLOduration=1.288104906 podStartE2EDuration="6.489005142s" podCreationTimestamp="2024-09-16 10:50:15 +0000 UTC" firstStartedPulling="2024-09-16 10:50:16.00867893 +0000 UTC m=+26.037713028" lastFinishedPulling="2024-09-16 10:50:21.20957917 +0000 UTC m=+31.238613264" observedRunningTime="2024-09-16 10:50:21.4795614 +0000 UTC m=+31.508595511" watchObservedRunningTime="2024-09-16 10:50:21.489005142 +0000 UTC m=+31.518039250"
	Sep 16 10:50:50 ubuntu-20-agent-2 kubelet[46464]: I0916 10:50:50.228146   46464 scope.go:117] "RemoveContainer" containerID="67e355cfcbda0b8f8cbbef59d43583d5570387eb8f3650ac546b1c8e807ddd74"
	
	
	==> kubernetes-dashboard [b7dca8e1a741] <==
	2024/09/16 10:50:20 Using namespace: kubernetes-dashboard
	2024/09/16 10:50:20 Using in-cluster config to connect to apiserver
	2024/09/16 10:50:20 Using secret token for csrf signing
	2024/09/16 10:50:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:50:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:50:20 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:50:20 Generating JWE encryption key
	2024/09/16 10:50:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:50:21 Initializing JWE encryption key from synchronized object
	2024/09/16 10:50:21 Creating in-cluster Sidecar client
	2024/09/16 10:50:21 Serving insecurely on HTTP port: 9090
	2024/09/16 10:50:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 10:50:51 Successful request to sidecar
	
	
	==> storage-provisioner [088c924c7836] <==
	I0916 10:49:54.673228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:49:54.686267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:49:54.686349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:50:12.083437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:50:12.083563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"741f2d64-542e-41ba-a831-0f0a3ad64a15", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77 became leader
	I0916 10:50:12.083591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	I0916 10:50:12.184444       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e977942a-b3a8-421e-a292-c6da5b2bbb77!
	
	
	==> storage-provisioner [b80696d65d3f] <==
	

-- /stdout --
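Note on the repeated "martian source" lines in the dmesg section of the log dump above: these are kernel notices about packets whose source address looks wrong for the interface they arrived on (here, pod-network addresses from 10.244.0.0/24 showing up on eth0). With the none driver this is routine noise rather than a failure cause, and the kernel only emits these notices when martian logging is enabled. A minimal Go sketch (a hypothetical diagnostic, not part of this test suite) to check that sysctl on the host:

// Hypothetical helper: reports whether the kernel logs "martian source"
// packets (net.ipv4.conf.all.log_martians), which is what produces the
// dmesg entries quoted in the log dump above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
	if err != nil {
		fmt.Fprintln(os.Stderr, "could not read sysctl:", err)
		os.Exit(1)
	}
	// A value of 1 means such packets are logged; 0 means the dmesg noise is suppressed.
	fmt.Printf("net.ipv4.conf.all.log_martians = %s\n", strings.TrimSpace(string(b)))
}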
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (500.276µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/NodeLabels (1.07s)
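For reference, "fork/exec /usr/local/bin/kubectl: exec format error" is Linux's ENOEXEC: the file exists and is executable, but it is not a binary the host can run, most often because it was built for a different CPU architecture (or is truncated/corrupt). A minimal Go sketch (hypothetical, assuming the file is an ELF binary) that prints the binary's target machine next to the host's OS/arch so a mismatch is obvious:

// Hypothetical diagnostic for "exec format error": compare the ELF machine
// type of the kubectl binary with the architecture of the host running it
// (e.g. an EM_AARCH64 binary on an amd64 host would explain the failure).
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path taken from the failing command above
	f, err := elf.Open(path)
	if err != nil {
		// Not a valid ELF file at all (empty, truncated, or e.g. a saved HTML error page).
		fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Printf("%s targets %v; host is %s/%s\n", path, f.Machine, runtime.GOOS, runtime.GOARCH)
}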

TestKubernetesUpgrade (18.48s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: exit status 100 (2.381000665s)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on user configuration
	* Starting "minikube" primary control-plane node in "minikube" cluster
	* Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	* OS release is Ubuntu 20.04.6 LTS
	* Preparing Kubernetes v1.20.0 on Docker 27.2.1 ...
	  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	
	

-- /stdout --
** stderr ** 
	I0916 10:58:45.740771   82222 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:58:45.740878   82222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:58:45.740887   82222 out.go:358] Setting ErrFile to fd 2...
	I0916 10:58:45.740892   82222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:58:45.741078   82222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:58:45.741612   82222 out.go:352] Setting JSON to false
	I0916 10:58:45.742578   82222 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2477,"bootTime":1726481849,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:58:45.742667   82222 start.go:139] virtualization: kvm guest
	I0916 10:58:45.745045   82222 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:58:45.746481   82222 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:58:45.746515   82222 notify.go:220] Checking for updates...
	I0916 10:58:45.746533   82222 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:58:45.747825   82222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:58:45.749126   82222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:58:45.750423   82222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:58:45.751725   82222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:58:45.752936   82222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:58:45.754269   82222 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:58:45.764698   82222 out.go:177] * Using the none driver based on user configuration
	I0916 10:58:45.765845   82222 start.go:297] selected driver: none
	I0916 10:58:45.765862   82222 start.go:901] validating driver "none" against <nil>
	I0916 10:58:45.765876   82222 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:58:45.765912   82222 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:58:45.766390   82222 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0916 10:58:45.767271   82222 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:58:45.767595   82222 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:58:45.767632   82222 cni.go:84] Creating CNI manager for ""
	I0916 10:58:45.767720   82222 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:58:45.767788   82222 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:45.769947   82222 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:58:45.771338   82222 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:58:45.771375   82222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:45.771550   82222 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:58:45.771597   82222 start.go:364] duration metric: took 28.935µs to acquireMachinesLock for "minikube"
	I0916 10:58:45.771614   82222 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:58:45.771690   82222 start.go:125] createHost starting for "" (driver="none")
	I0916 10:58:45.773079   82222 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 10:58:45.774274   82222 exec_runner.go:51] Run: systemctl --version
	I0916 10:58:45.776659   82222 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 10:58:45.776697   82222 client.go:168] LocalClient.Create starting
	I0916 10:58:45.776752   82222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem
	I0916 10:58:45.776786   82222 main.go:141] libmachine: Decoding PEM data...
	I0916 10:58:45.776813   82222 main.go:141] libmachine: Parsing certificate...
	I0916 10:58:45.776865   82222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem
	I0916 10:58:45.776892   82222 main.go:141] libmachine: Decoding PEM data...
	I0916 10:58:45.776914   82222 main.go:141] libmachine: Parsing certificate...
	I0916 10:58:45.777314   82222 client.go:171] duration metric: took 608.04µs to LocalClient.Create
	I0916 10:58:45.777339   82222 start.go:167] duration metric: took 681.055µs to libmachine.API.Create "minikube"
	I0916 10:58:45.777347   82222 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:58:45.777356   82222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:58:45.777395   82222 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:58:45.786615   82222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:58:45.786642   82222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:58:45.786655   82222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:58:45.788349   82222 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:58:45.789422   82222 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:58:45.789476   82222 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:58:45.789552   82222 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> 110572.pem in /etc/ssl/certs
	I0916 10:58:45.789640   82222 exec_runner.go:51] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:58:45.797902   82222 exec_runner.go:144] found /etc/ssl/certs/110572.pem, removing ...
	I0916 10:58:45.797922   82222 exec_runner.go:203] rm: /etc/ssl/certs/110572.pem
	I0916 10:58:45.797966   82222 exec_runner.go:51] Run: sudo rm -f /etc/ssl/certs/110572.pem
	I0916 10:58:45.805208   82222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:58:45.805373   82222 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1784004691 /etc/ssl/certs/110572.pem
	I0916 10:58:45.813202   82222 start.go:296] duration metric: took 35.84543ms for postStartSetup
	I0916 10:58:45.813793   82222 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:58:45.813970   82222 start.go:128] duration metric: took 42.271043ms to createHost
	I0916 10:58:45.813984   82222 start.go:83] releasing machines lock for "minikube", held for 42.376692ms
	I0916 10:58:45.814320   82222 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:58:45.814423   82222 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:58:45.816185   82222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:58:45.816245   82222 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 10:58:45.824407   82222 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 10:58:45.832129   82222 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0916 10:58:45.832160   82222 start.go:495] detecting cgroup driver to use...
	I0916 10:58:45.832205   82222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:45.832310   82222 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:45.848938   82222 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0916 10:58:45.857393   82222 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:58:45.866185   82222 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:58:45.866232   82222 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:58:45.875351   82222 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:45.884593   82222 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:58:45.894647   82222 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:45.903670   82222 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:58:45.911636   82222 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:58:45.920724   82222 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:58:45.927823   82222 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:58:45.934710   82222 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:46.143917   82222 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:58:46.210472   82222 start.go:495] detecting cgroup driver to use...
	I0916 10:58:46.210519   82222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:46.210631   82222 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:46.231183   82222 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:58:46.232146   82222 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:58:46.239700   82222 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:58:46.239717   82222 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:58:46.239753   82222 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:58:46.247062   82222 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 10:58:46.247204   82222 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3902480192 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:58:46.255854   82222 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:58:46.465477   82222 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:58:46.689643   82222 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:58:46.689831   82222 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:58:46.689849   82222 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:58:46.689908   82222 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:58:46.698460   82222 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:58:46.698602   82222 exec_runner.go:51] Run: sudo cp -a /tmp/minikube921523275 /etc/docker/daemon.json
	I0916 10:58:46.706429   82222 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:46.907897   82222 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:58:47.223548   82222 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:58:47.243830   82222 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:58:47.265822   82222 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 27.2.1 ...
	I0916 10:58:47.265886   82222 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:58:47.268515   82222 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:58:47.269904   82222 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:58:47.270024   82222 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:58:47.270036   82222 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.20.0 docker true true} ...
	I0916 10:58:47.270127   82222 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:58:47.270169   82222 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:58:47.315936   82222 cni.go:84] Creating CNI manager for ""
	I0916 10:58:47.315960   82222 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:58:47.315970   82222 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:58:47.315992   82222 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 10:58:47.316170   82222 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
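	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube hands to kubeadm. A rendered config like this can be sanity-checked outside the test without modifying the node; a minimal sketch, assuming the YAML is saved to /tmp/kubeadm.yaml (a hypothetical path) and a matching kubeadm binary is on PATH:
	
	    # validate the config and print the planned actions only; nothing is applied
	    sudo kubeadm init --config /tmp/kubeadm.yaml --dry-run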
	
	I0916 10:58:47.316241   82222 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 10:58:47.324805   82222 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.20.0: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.20.0': No such file or directory
	
	Initiating transfer...
	I0916 10:58:47.324864   82222 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.20.0
	I0916 10:58:47.333841   82222 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256
	I0916 10:58:47.333858   82222 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256
	I0916 10:58:47.333901   82222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubeadm --> /var/lib/minikube/binaries/v1.20.0/kubeadm (39219200 bytes)
	I0916 10:58:47.333901   82222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubectl --> /var/lib/minikube/binaries/v1.20.0/kubectl (40230912 bytes)
	I0916 10:58:47.333844   82222 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet
	I0916 10:58:47.372090   82222 exec_runner.go:51] Run: sudo cp -a /tmp/minikube654309643 /var/lib/minikube/binaries/v1.20.0/kubeadm
	I0916 10:58:47.374655   82222 exec_runner.go:51] Run: sudo cp -a /tmp/minikube999538379 /var/lib/minikube/binaries/v1.20.0/kubectl
	I0916 10:58:48.060280   82222 out.go:201] 
	W0916 10:58:48.061587   82222 out.go:270] X Exiting due to K8S_INSTALL_FAILED: Failed to update cluster: update primary control-plane node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200] Decompressors:map[bz2:0xc0006d4e20 gz:0xc0006d4e28 tar:0xc0006d4dc0 tar.bz2:0xc0006d4de0 tar.gz:0xc0006d4df0 tar.xz:0xc0006d4e00 tar.zst:0xc0006d4e10 tbz2:0xc0006d4de0 tgz:0xc0006d4df0 txz:0xc0006d4e00 tzst:0xc0006d4e10 xz:0xc0006d4e30 zip:0xc0006d4e40 zst:0xc0006d4e38] Getters:map[file:0xc000595690 http:0xc000a3c500 https:0xc000a3c550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: bad response code: 403
	W0916 10:58:48.061608   82222 out.go:270] * 
	W0916 10:58:48.062424   82222 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:58:48.064149   82222 out.go:201] 

                                                
                                                
** /stderr **
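The root cause captured above is an HTTP 403 from dl.k8s.io while fetching the v1.20.0 kubelet, surfaced through go-getter's checksum-verifying download. The fetch can be reproduced independently of minikube; a minimal sketch, assuming curl and sha256sum are available on the runner:

    # show the status code the getter saw (dl.k8s.io redirects, so follow with -L)
    curl -sSIL https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet
    # if that returns 200, download and verify against the published checksum,
    # mirroring the ?checksum=file:...kubelet.sha256 parameter in the log
    curl -sSLo /tmp/kubelet https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet
    curl -sSL https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256
    sha256sum /tmp/kubelet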
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: exit status 100
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (73.534531ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (13.940623086s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context minikube version --output=json: fork/exec /usr/local/bin/kubectl: exec format error (488.067µs)
version_upgrade_test.go:250: error running kubectl: fork/exec /usr/local/bin/kubectl: exec format error
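An "exec format error" from fork/exec means the file at /usr/local/bin/kubectl is not a loadable linux/amd64 executable: wrong architecture, a truncated download, or non-binary content such as an HTML error page. The log does not show how that kubectl was installed, so this is triage rather than diagnosis; a sketch using standard utilities:

    file /usr/local/bin/kubectl                 # expect: ELF 64-bit LSB executable, x86-64
    stat -c '%s bytes' /usr/local/bin/kubectl   # an implausibly small size suggests a truncated or error-page download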
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-16 10:59:02.179404781 +0000 UTC m=+2194.125274293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	| start   | -p minikube --memory=2048      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:56 UTC |
	|         | --install-addons=false         |          |         |         |                     |                     |
	|         | --wait=all --driver=none       |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| start   | -p minikube --alsologtostderr  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	|         | -v=1 --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| pause   | -p minikube --alsologtostderr  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	|         | -v=5                           |          |         |         |                     |                     |
	| unpause | -p minikube --alsologtostderr  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	|         | -v=5                           |          |         |         |                     |                     |
	| pause   | -p minikube --alsologtostderr  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	|         | -v=5                           |          |         |         |                     |                     |
	| delete  | -p minikube --alsologtostderr  | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	|         | -v=5                           |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| start   | -p minikube --memory=2200      | minikube | jenkins | v1.26.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:57 UTC |
	|         | --vm-driver=none               |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| start   | -p minikube --memory=2200      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | -v=1 --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	| start   | -p minikube --memory=2200      | minikube | jenkins | v1.26.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:58 UTC |
	|         | --vm-driver=none               |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| stop    | minikube stop                  | minikube | jenkins | v1.26.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:58 UTC |
	| start   | -p minikube --memory=2200      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:58 UTC |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | -v=1 --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:58 UTC |
	| start   | -p minikube --memory=2200      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | -v=1 --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| stop    | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:58 UTC |
	| start   | -p minikube --memory=2200      | minikube | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:59 UTC |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | -v=1 --driver=none             |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:58:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:58:48.275807   82766 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:58:48.275933   82766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:58:48.275942   82766 out.go:358] Setting ErrFile to fd 2...
	I0916 10:58:48.275946   82766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:58:48.276106   82766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:58:48.276588   82766 out.go:352] Setting JSON to false
	I0916 10:58:48.277481   82766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2479,"bootTime":1726481849,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:58:48.277575   82766 start.go:139] virtualization: kvm guest
	I0916 10:58:48.279590   82766 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:58:48.280901   82766 notify.go:220] Checking for updates...
	W0916 10:58:48.280899   82766 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:58:48.280940   82766 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:58:48.282322   82766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:58:48.283691   82766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:58:48.285238   82766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:58:48.286441   82766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:58:48.287630   82766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:58:48.289089   82766 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 10:58:48.289382   82766 exec_runner.go:51] Run: systemctl --version
	I0916 10:58:48.291962   82766 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:58:48.302367   82766 out.go:177] * Using the none driver based on existing profile
	I0916 10:58:48.303670   82766 start.go:297] selected driver: none
	I0916 10:58:48.303687   82766 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:48.303820   82766 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:58:48.303849   82766 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:58:48.304173   82766 out.go:270] ! The 'none' driver does not respect the --memory flag
	W0916 10:58:48.305250   82766 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 10:58:48.305413   82766 cni.go:84] Creating CNI manager for ""
	I0916 10:58:48.305482   82766 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:58:48.305556   82766 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:48.307236   82766 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:58:48.308645   82766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:58:48.308852   82766 start.go:360] acquireMachinesLock for minikube: {Name:mk411ea64c19450b270349394398661fc1fd1151 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:58:48.308933   82766 start.go:364] duration metric: took 43.594µs to acquireMachinesLock for "minikube"
	I0916 10:58:48.308952   82766 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:58:48.308961   82766 fix.go:54] fixHost starting: 
	W0916 10:58:48.309445   82766 none.go:130] unable to get port: "minikube" does not appear in /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:58:48.309461   82766 api_server.go:166] Checking apiserver status ...
	I0916 10:58:48.309508   82766 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 10:58:48.323409   82766 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
	stdout:
	
	stderr:
	I0916 10:58:48.323457   82766 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:58:48.335816   82766 fix.go:112] recreateIfNeeded on minikube: state=Stopped err=<nil>
	W0916 10:58:48.335840   82766 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:58:48.337697   82766 out.go:177] * Restarting existing none bare metal machine for "minikube" ...
	I0916 10:58:48.340096   82766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:58:48.340219   82766 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 10:58:48.340261   82766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:58:48.340295   82766 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:58:48.348268   82766 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:58:48.348290   82766 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:58:48.348299   82766 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:58:48.349924   82766 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 10:58:48.351020   82766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/addons for local assets ...
	I0916 10:58:48.351076   82766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3763/.minikube/files for local assets ...
	I0916 10:58:48.351200   82766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem -> 110572.pem in /etc/ssl/certs
	I0916 10:58:48.351310   82766 exec_runner.go:51] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:58:48.359525   82766 exec_runner.go:144] found /etc/ssl/certs/110572.pem, removing ...
	I0916 10:58:48.359549   82766 exec_runner.go:203] rm: /etc/ssl/certs/110572.pem
	I0916 10:58:48.359595   82766 exec_runner.go:51] Run: sudo rm -f /etc/ssl/certs/110572.pem
	I0916 10:58:48.366814   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:58:48.366979   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3587722911 /etc/ssl/certs/110572.pem
	I0916 10:58:48.375042   82766 start.go:296] duration metric: took 34.805507ms for postStartSetup
	I0916 10:58:48.375067   82766 fix.go:56] duration metric: took 66.107695ms for fixHost
	I0916 10:58:48.375075   82766 start.go:83] releasing machines lock for "minikube", held for 66.13037ms
	I0916 10:58:48.375414   82766 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:58:48.375495   82766 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 10:58:48.377189   82766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:58:48.377233   82766 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 10:58:48.385051   82766 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 10:58:48.394521   82766 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0916 10:58:48.394608   82766 start.go:495] detecting cgroup driver to use...
	I0916 10:58:48.394641   82766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:48.394739   82766 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:48.411359   82766 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:58:48.421748   82766 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:58:48.429936   82766 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:58:48.429988   82766 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:58:48.438509   82766 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:48.447232   82766 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:58:48.455486   82766 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:48.463766   82766 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:58:48.471773   82766 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:58:48.479972   82766 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:58:48.488820   82766 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:58:48.497137   82766 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:58:48.504274   82766 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:58:48.512024   82766 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:48.727318   82766 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 10:58:48.791544   82766 start.go:495] detecting cgroup driver to use...
	I0916 10:58:48.791592   82766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:48.791706   82766 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:48.810455   82766 exec_runner.go:51] Run: which cri-dockerd
	I0916 10:58:48.811319   82766 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:58:48.818668   82766 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 10:58:48.818688   82766 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:58:48.818722   82766 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:58:48.827219   82766 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:58:48.827388   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2377151829 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 10:58:48.835401   82766 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 10:58:49.056035   82766 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 10:58:49.278445   82766 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:58:49.278588   82766 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 10:58:49.278600   82766 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 10:58:49.278634   82766 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 10:58:49.286734   82766 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:58:49.286875   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2780628422 /etc/docker/daemon.json
	I0916 10:58:49.295350   82766 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:49.504611   82766 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 10:58:49.969642   82766 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:58:49.980556   82766 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 10:58:49.996180   82766 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:58:50.006672   82766 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:58:50.214681   82766 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 10:58:50.421150   82766 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:50.636457   82766 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 10:58:50.650178   82766 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:58:50.660023   82766 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:50.872304   82766 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 10:58:50.943019   82766 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:58:50.943099   82766 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 10:58:50.944479   82766 start.go:563] Will wait 60s for crictl version
	I0916 10:58:50.944532   82766 exec_runner.go:51] Run: which crictl
	I0916 10:58:50.945422   82766 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 10:58:50.973862   82766 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:58:50.973923   82766 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:58:50.995470   82766 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:58:51.020741   82766 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:58:51.020808   82766 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 10:58:51.023483   82766 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 10:58:51.024645   82766 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:58:51.024780   82766 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:58:51.024792   82766 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0916 10:58:51.024874   82766 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 10:58:51.024920   82766 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 10:58:51.072348   82766 cni.go:84] Creating CNI manager for ""
	I0916 10:58:51.072376   82766 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:58:51.072386   82766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:58:51.072411   82766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:58:51.072587   82766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:58:51.072651   82766 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:58:51.082275   82766 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 10:58:51.082341   82766 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 10:58:51.091686   82766 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 10:58:51.091715   82766 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 10:58:51.091728   82766 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:58:51.091757   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 10:58:51.091843   82766 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 10:58:51.091895   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 10:58:51.104846   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 10:58:51.142089   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube986896736 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 10:58:51.144086   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1570956874 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 10:58:51.175029   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3564510244 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 10:58:51.240688   82766 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:58:51.249354   82766 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 10:58:51.249371   82766 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:58:51.249409   82766 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:58:51.256341   82766 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 10:58:51.256460   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube366322617 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 10:58:51.264847   82766 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 10:58:51.264869   82766 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 10:58:51.264900   82766 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 10:58:51.273129   82766 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:58:51.273270   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3332077831 /lib/systemd/system/kubelet.service
	I0916 10:58:51.280812   82766 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0916 10:58:51.280920   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1225286787 /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:58:51.289296   82766 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0916 10:58:51.290628   82766 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:58:51.504312   82766 exec_runner.go:51] Run: sudo systemctl start kubelet
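	Whether the kubelet actually stayed up after the systemctl start above can be confirmed directly on the host; a sketch using standard systemd tooling:
	
	    systemctl is-active kubelet                   # prints "active" on success
	    sudo journalctl -u kubelet --no-pager -n 50   # on failure, recent lines usually name the offending flag or config file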
	I0916 10:58:51.518880   82766 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube for IP: 10.138.0.48
	I0916 10:58:51.518907   82766 certs.go:194] generating shared ca certs ...
	I0916 10:58:51.518926   82766 certs.go:226] acquiring lock for ca certs: {Name:mk043c41e08f736aac60a186c6b5a39a44adfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:51.519053   82766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key
	I0916 10:58:51.519097   82766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key
	I0916 10:58:51.519108   82766 certs.go:256] generating profile certs ...
	I0916 10:58:51.519157   82766 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key
	I0916 10:58:51.519179   82766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 10:58:51.630905   82766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt ...
	I0916 10:58:51.630933   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt: {Name:mk3286357234cda40557f508e5029c93016f9710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:51.631074   82766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key ...
	I0916 10:58:51.631084   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key: {Name:mk20783244a73e90e04cdbc506e3032ad365b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:51.631144   82766 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0916 10:58:51.631158   82766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0916 10:58:51.853450   82766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0916 10:58:51.853486   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkaaeb0c21c9904b79d53b2917cee631d41c921c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:51.853640   82766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0916 10:58:51.853656   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06e5d9a924eb3ef87fa2b5fa51a9f83a4abb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:51.853751   82766 certs.go:381] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt
	I0916 10:58:51.853878   82766 certs.go:385] copying /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key
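	The assembled apiserver certificate should embed the SANs requested at 10:58:51.631158 above (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.138.0.48). A way to confirm, assuming openssl is installed on the runner:
	
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'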
	I0916 10:58:51.853962   82766 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key
	I0916 10:58:51.853982   82766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 10:58:52.097975   82766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 10:58:52.098010   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt: {Name:mkffd4795ad0708e29c9e63a9f73c6e601584e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:52.098155   82766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key ...
	I0916 10:58:52.098170   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key: {Name:mk1595e9621083c2801a11be8a4c6d2c56ebeb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:52.098365   82766 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem (1338 bytes)
	W0916 10:58:52.098414   82766 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057_empty.pem, impossibly tiny 0 bytes
	I0916 10:58:52.098427   82766 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:58:52.098471   82766 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:58:52.098507   82766 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:58:52.098540   82766 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/key.pem (1679 bytes)
	I0916 10:58:52.098597   82766 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem (1708 bytes)
	I0916 10:58:52.099158   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:58:52.099293   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2623731347 /var/lib/minikube/certs/ca.crt
	I0916 10:58:52.108968   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:58:52.109105   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube765748639 /var/lib/minikube/certs/ca.key
	I0916 10:58:52.118464   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:58:52.118569   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000907185 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:58:52.127027   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:58:52.127124   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1849784116 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:58:52.135160   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 10:58:52.135268   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube881890656 /var/lib/minikube/certs/apiserver.crt
	I0916 10:58:52.144509   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:58:52.144645   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4035751023 /var/lib/minikube/certs/apiserver.key
	I0916 10:58:52.152582   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:58:52.152704   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube270405718 /var/lib/minikube/certs/proxy-client.crt
	I0916 10:58:52.160561   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:58:52.160698   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2959287653 /var/lib/minikube/certs/proxy-client.key
	I0916 10:58:52.169029   82766 exec_runner.go:144] found /usr/share/ca-certificates/110572.pem, removing ...
	I0916 10:58:52.169050   82766 exec_runner.go:203] rm: /usr/share/ca-certificates/110572.pem
	I0916 10:58:52.169100   82766 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/110572.pem
	I0916 10:58:52.177180   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/ssl/certs/110572.pem --> /usr/share/ca-certificates/110572.pem (1708 bytes)
	I0916 10:58:52.177317   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube955390145 /usr/share/ca-certificates/110572.pem
	I0916 10:58:52.185375   82766 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 10:58:52.185395   82766 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:52.185440   82766 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:52.192731   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:58:52.192910   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3038758933 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:52.200684   82766 exec_runner.go:144] found /usr/share/ca-certificates/11057.pem, removing ...
	I0916 10:58:52.200706   82766 exec_runner.go:203] rm: /usr/share/ca-certificates/11057.pem
	I0916 10:58:52.200755   82766 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/11057.pem
	I0916 10:58:52.208893   82766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3763/.minikube/certs/11057.pem --> /usr/share/ca-certificates/11057.pem (1338 bytes)
	I0916 10:58:52.209062   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3727484357 /usr/share/ca-certificates/11057.pem
	I0916 10:58:52.216932   82766 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:58:52.217079   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2268187402 /var/lib/minikube/kubeconfig
	I0916 10:58:52.224525   82766 exec_runner.go:51] Run: openssl version
	I0916 10:58:52.227298   82766 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110572.pem && ln -fs /usr/share/ca-certificates/110572.pem /etc/ssl/certs/110572.pem"
	I0916 10:58:52.236193   82766 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/110572.pem
	I0916 10:58:52.237544   82766 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1708 Sep 16 10:58 /usr/share/ca-certificates/110572.pem
	I0916 10:58:52.237597   82766 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110572.pem
	I0916 10:58:52.240497   82766 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110572.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:58:52.248358   82766 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:58:52.256773   82766 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:52.258272   82766 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 10:58 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:52.258323   82766 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:52.261141   82766 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:58:52.269757   82766 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11057.pem && ln -fs /usr/share/ca-certificates/11057.pem /etc/ssl/certs/11057.pem"
	I0916 10:58:52.278614   82766 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/11057.pem
	I0916 10:58:52.280022   82766 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1338 Sep 16 10:58 /usr/share/ca-certificates/11057.pem
	I0916 10:58:52.280077   82766 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11057.pem
	I0916 10:58:52.283291   82766 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11057.pem /etc/ssl/certs/51391683.0"
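The three hash-and-link passes above (110572.pem, minikubeCA.pem, 11057.pem) follow the standard OpenSSL CA-directory convention: each trusted PEM is hashed with openssl x509 -hash and exposed under /etc/ssl/certs as <subject-hash>.0. A minimal sketch of one pass, with a placeholder cert name:

    # Hash-and-link one CA the way the Run lines above do (example.pem is a placeholder).
    PEM=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")    # e.g. b5213941 for minikubeCA.pem
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"   # OpenSSL looks CAs up by <hash>.0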
	I0916 10:58:52.291482   82766 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:52.292940   82766 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:58:52.292992   82766 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
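The StartCluster line above dumps the whole profile config as a Go struct; minikube also persists the same configuration as JSON under the profile directory, which is easier to read when picking apart a run like this one:

    # Same config as the struct dump above, but as JSON on disk.
    cat /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json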
	I0916 10:58:52.293153   82766 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:58:52.311316   82766 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:58:52.320242   82766 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:58:52.329155   82766 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 10:58:52.349356   82766 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:58:52.357528   82766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:58:52.357547   82766 kubeadm.go:157] found existing configuration files:
	
	I0916 10:58:52.357584   82766 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:58:52.366495   82766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:58:52.366546   82766 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:58:52.373841   82766 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:58:52.382210   82766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:58:52.382278   82766 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:58:52.389664   82766 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:58:52.397599   82766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:58:52.397644   82766 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:58:52.404820   82766 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:58:52.412226   82766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:58:52.412274   82766 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
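Each of the four grep/rm pairs above is the same guard: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it before kubeadm init runs. The equivalent loop, as a sketch:

    # Stale-config cleanup equivalent to the four Run pairs above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done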
	I0916 10:58:52.419191   82766 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:58:52.452764   82766 kubeadm.go:310] W0916 10:58:52.452629   83698 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:58:52.453319   82766 kubeadm.go:310] W0916 10:58:52.453248   83698 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:58:52.455066   82766 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:58:52.455119   82766 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:58:52.556033   82766 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:58:52.556160   82766 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:58:52.556173   82766 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:58:52.556178   82766 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:58:52.567729   82766 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:58:52.570826   82766 out.go:235]   - Generating certificates and keys ...
	I0916 10:58:52.570873   82766 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:58:52.570888   82766 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:58:52.971666   82766 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:58:53.083493   82766 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:58:53.207225   82766 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:58:53.342322   82766 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:58:53.445244   82766 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:58:53.445372   82766 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:58:53.610665   82766 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:58:53.610826   82766 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0916 10:58:53.795179   82766 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:58:54.129834   82766 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:58:54.371472   82766 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:58:54.371703   82766 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:58:54.775284   82766 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:58:55.243440   82766 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:58:55.337988   82766 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:58:55.450646   82766 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:58:55.534936   82766 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:58:55.535462   82766 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:58:55.538636   82766 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:58:55.540867   82766 out.go:235]   - Booting up control plane ...
	I0916 10:58:55.540893   82766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:58:55.540910   82766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:58:55.541348   82766 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:58:55.562670   82766 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:58:55.567287   82766 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:58:55.567327   82766 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:58:55.791868   82766 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:58:55.791896   82766 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:58:56.293367   82766 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.482518ms
	I0916 10:58:56.293388   82766 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:59:00.294470   82766 kubeadm.go:310] [api-check] The API server is healthy after 4.001067987s
	I0916 10:59:00.304512   82766 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:59:00.315446   82766 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:59:00.334042   82766 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:59:00.334070   82766 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:59:00.343014   82766 kubeadm.go:310] [bootstrap-token] Using token: 4p0zrm.fu8tqifcumsbo1wm
	I0916 10:59:00.344331   82766 out.go:235]   - Configuring RBAC rules ...
	I0916 10:59:00.344360   82766 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:59:00.347898   82766 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:59:00.353522   82766 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:59:00.355777   82766 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:59:00.358109   82766 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:59:00.360247   82766 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:59:00.701111   82766 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:59:01.120179   82766 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:59:01.700370   82766 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:59:01.701513   82766 kubeadm.go:310] 
	I0916 10:59:01.701527   82766 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:59:01.701531   82766 kubeadm.go:310] 
	I0916 10:59:01.701536   82766 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:59:01.701539   82766 kubeadm.go:310] 
	I0916 10:59:01.701544   82766 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:59:01.701548   82766 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:59:01.701552   82766 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:59:01.701556   82766 kubeadm.go:310] 
	I0916 10:59:01.701560   82766 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:59:01.701563   82766 kubeadm.go:310] 
	I0916 10:59:01.701567   82766 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:59:01.701570   82766 kubeadm.go:310] 
	I0916 10:59:01.701574   82766 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:59:01.701577   82766 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:59:01.701581   82766 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:59:01.701584   82766 kubeadm.go:310] 
	I0916 10:59:01.701588   82766 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:59:01.701591   82766 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:59:01.701595   82766 kubeadm.go:310] 
	I0916 10:59:01.701597   82766 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4p0zrm.fu8tqifcumsbo1wm \
	I0916 10:59:01.701607   82766 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d \
	I0916 10:59:01.701614   82766 kubeadm.go:310] 	--control-plane 
	I0916 10:59:01.701618   82766 kubeadm.go:310] 
	I0916 10:59:01.701621   82766 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:59:01.701625   82766 kubeadm.go:310] 
	I0916 10:59:01.701628   82766 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4p0zrm.fu8tqifcumsbo1wm \
	I0916 10:59:01.701632   82766 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9b8537530f21498f103de5323de5f463fedacf99cc222bbc382f853bc543eb5d 
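The token in the join commands above has kubeadm's default 24h TTL; if a node were to join after that window, a fresh command can be printed on the control plane (standard kubeadm behavior, nothing minikube-specific):

    # Regenerate the worker join command once the printed token expires.
    sudo kubeadm token create --print-join-command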
	I0916 10:59:01.704685   82766 cni.go:84] Creating CNI manager for ""
	I0916 10:59:01.704710   82766 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:59:01.706505   82766 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:59:01.707756   82766 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:59:01.718418   82766 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:59:01.718558   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1216829256 /etc/cni/net.d/1-k8s.conflist
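The 496-byte /etc/cni/net.d/1-k8s.conflist installed here is minikube's bridge CNI config. Its exact contents are not captured in the log; a representative bridge conflist looks roughly like the following, where every value is illustrative rather than what minikube actually wrote:

    # Illustrative bridge conflist only; the real 1-k8s.conflist may differ.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF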
	I0916 10:59:01.727433   82766 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:59:01.727522   82766 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:59:01.727535   82766 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_16T10_59_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 10:59:01.737052   82766 ops.go:34] apiserver oom_adj: -16
	I0916 10:59:01.795743   82766 kubeadm.go:1113] duration metric: took 68.267827ms to wait for elevateKubeSystemPrivileges
	I0916 10:59:01.807445   82766 kubeadm.go:394] duration metric: took 9.514452264s to StartCluster
	I0916 10:59:01.807481   82766 settings.go:142] acquiring lock: {Name:mk1ccb2834f5d4c02b7e4597585f037e897f4563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:59:01.807535   82766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:59:01.807987   82766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/kubeconfig: {Name:mk1f075059cdab46e790ef66b94ff3400883ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:59:01.808236   82766 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:59:01.808332   82766 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 10:59:01.808348   82766 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 10:59:01.808351   82766 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 10:59:01.808367   82766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 10:59:01.808386   82766 host.go:66] Checking if "minikube" exists ...
	I0916 10:59:01.808446   82766 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:59:01.808956   82766 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:59:01.808977   82766 api_server.go:166] Checking apiserver status ...
	I0916 10:59:01.809013   82766 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:59:01.809013   82766 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:59:01.809122   82766 api_server.go:166] Checking apiserver status ...
	I0916 10:59:01.809154   82766 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:59:01.809977   82766 out.go:177] * Configuring local host environment ...
	W0916 10:59:01.811267   82766 out.go:270] * 
	W0916 10:59:01.811286   82766 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 10:59:01.811296   82766 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 10:59:01.811304   82766 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 10:59:01.811312   82766 out.go:270] * 
	W0916 10:59:01.811366   82766 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 10:59:01.811381   82766 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 10:59:01.811390   82766 out.go:270] * 
	W0916 10:59:01.811415   82766 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 10:59:01.811422   82766 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 10:59:01.811430   82766 out.go:270] * 
	W0916 10:59:01.811436   82766 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 10:59:01.811466   82766 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:59:01.813470   82766 out.go:177] * Verifying Kubernetes components...
	I0916 10:59:01.814993   82766 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 10:59:01.824822   82766 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/84145/cgroup
	I0916 10:59:01.825368   82766 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/84145/cgroup
	I0916 10:59:01.835154   82766 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/26d9c0b5e6ffa91ad5302330391a6e18bb58244eaddc8d1aeca84172db30ae87"
	I0916 10:59:01.835218   82766 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/26d9c0b5e6ffa91ad5302330391a6e18bb58244eaddc8d1aeca84172db30ae87/freezer.state
	I0916 10:59:01.835465   82766 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/26d9c0b5e6ffa91ad5302330391a6e18bb58244eaddc8d1aeca84172db30ae87"
	I0916 10:59:01.835519   82766 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/26d9c0b5e6ffa91ad5302330391a6e18bb58244eaddc8d1aeca84172db30ae87/freezer.state
	I0916 10:59:01.846170   82766 api_server.go:204] freezer state: "THAWED"
	I0916 10:59:01.846196   82766 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:59:01.846625   82766 api_server.go:204] freezer state: "THAWED"
	I0916 10:59:01.846712   82766 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:59:01.850735   82766 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:59:01.851896   82766 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
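The interleaved egrep/cat/healthz runs above (two goroutines performing the same probe) are how minikube verifies a none-driver apiserver: resolve the process's freezer cgroup, confirm its state is THAWED rather than FROZEN, then hit /healthz. By hand the probe looks like this; the healthz call may need the cluster CA or credentials depending on anonymous-auth settings:

    # Reproduce minikube's apiserver liveness probe from the log.
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    CG=$(sudo egrep '^[0-9]+:freezer:' "/proc/$PID/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # expect THAWED
    curl -k https://10.138.0.48:8443/healthz               # expect: ok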
	I0916 10:59:01.851976   82766 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:59:01.852561   82766 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:59:01.853207   82766 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:59:01.853236   82766 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 10:59:01.853244   82766 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:59:01.853280   82766 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:59:01.853204   82766 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 10:59:01.853416   82766 host.go:66] Checking if "minikube" exists ...
	I0916 10:59:01.854066   82766 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0916 10:59:01.854085   82766 api_server.go:166] Checking apiserver status ...
	I0916 10:59:01.854116   82766 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:59:01.862246   82766 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:59:01.862359   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube354459824 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:59:01.867687   82766 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/84145/cgroup
	I0916 10:59:01.871778   82766 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:59:01.875941   82766 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/26d9c0b5e6ffa91ad5302330391a6e18bb58244eaddc8d1aeca84172db30ae87"
	I0916 10:59:01.876024   82766 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/26d9c0b5e6ffa91ad5302330391a6e18bb58244eaddc8d1aeca84172db30ae87/freezer.state
	I0916 10:59:01.883544   82766 api_server.go:204] freezer state: "THAWED"
	I0916 10:59:01.883567   82766 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:59:01.887905   82766 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:59:01.887945   82766 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:59:01.887961   82766 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 10:59:01.887973   82766 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 10:59:01.888011   82766 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:59:01.897475   82766 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:59:01.897661   82766 exec_runner.go:51] Run: sudo cp -a /tmp/minikube720498856 /etc/kubernetes/addons/storageclass.yaml
	I0916 10:59:01.908124   82766 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:59:02.008320   82766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:59:02.008346   82766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:59:02.118527   82766 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 10:59:02.136030   82766 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.138.0.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:59:02.136581   82766 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:59:02.136657   82766 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:59:02.152490   82766 api_server.go:72] duration metric: took 340.987575ms to wait for apiserver process to appear ...
	I0916 10:59:02.152515   82766 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:59:02.152543   82766 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0916 10:59:02.156765   82766 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0916 10:59:02.157560   82766 api_server.go:141] control plane version: v1.31.1
	I0916 10:59:02.157581   82766 api_server.go:131] duration metric: took 5.058939ms to wait for apiserver health ...
	I0916 10:59:02.157588   82766 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:59:02.160567   82766 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:59:02.161854   82766 addons.go:510] duration metric: took 353.619645ms for enable addons: enabled=[default-storageclass storage-provisioner]
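With both addons reported enabled, a quick spot-check from the host confirms them (the default StorageClass minikube installs is named standard):

    # Spot-check the two enabled addons.
    kubectl get storageclass                              # expect: standard (default)
    kubectl -n kube-system get pod storage-provisioner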
	I0916 10:59:02.163080   82766 system_pods.go:59] 5 kube-system pods found
	I0916 10:59:02.163103   82766 system_pods.go:61] "etcd-ubuntu-20-agent-2" [8a187297-cf02-41fa-ab3a-c080c8cd9d47] Pending
	I0916 10:59:02.163108   82766 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [7983d9dc-fe06-47f7-9741-7fb2f7eb5277] Pending
	I0916 10:59:02.163112   82766 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [b2f5552d-0537-4e4f-8915-7bc033f24bc8] Pending
	I0916 10:59:02.163115   82766 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [5fea5680-2937-4c5c-888f-12e63300643b] Pending
	I0916 10:59:02.163118   82766 system_pods.go:61] "storage-provisioner" [b9b72d70-0ae8-4419-bd42-fac9e4b15283] Pending
	I0916 10:59:02.163123   82766 system_pods.go:74] duration metric: took 5.530576ms to wait for pod list to return data ...
	I0916 10:59:02.163134   82766 kubeadm.go:582] duration metric: took 351.642993ms to wait for: map[apiserver:true system_pods:true]
	I0916 10:59:02.163143   82766 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:59:02.165682   82766 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:02.165871   82766 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:02.165889   82766 node_conditions.go:105] duration metric: took 2.738985ms to run NodePressure ...
	I0916 10:59:02.165898   82766 start.go:241] waiting for startup goroutines ...
	I0916 10:59:02.165904   82766 start.go:246] waiting for cluster config update ...
	I0916 10:59:02.165913   82766 start.go:255] writing updated cluster config ...
	I0916 10:59:02.166116   82766 exec_runner.go:51] Run: rm -f paused
	I0916 10:59:02.169197   82766 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	E0916 10:59:02.170442   82766 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
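That final error is the only failure in an otherwise clean start: minikube could not exec /usr/local/bin/kubectl. An "exec format error" from fork/exec almost always means the binary does not match the host, i.e. a wrong-architecture or truncated download rather than anything cluster-side. Quick triage on the host:

    # Check what /usr/local/bin/kubectl actually is on this amd64 host.
    file /usr/local/bin/kubectl             # expect: ELF 64-bit LSB executable, x86-64
    head -c 4 /usr/local/bin/kubectl | xxd  # ELF magic bytes: 7f 45 4c 46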
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:59:02 UTC. --
	Sep 16 10:58:49 ubuntu-20-agent-2 dockerd[83010]: time="2024-09-16T10:58:49.878083624Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 16 10:58:49 ubuntu-20-agent-2 dockerd[83010]: time="2024-09-16T10:58:49.921152109Z" level=info msg="Loading containers: done."
	Sep 16 10:58:49 ubuntu-20-agent-2 dockerd[83010]: time="2024-09-16T10:58:49.934479716Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 16 10:58:49 ubuntu-20-agent-2 dockerd[83010]: time="2024-09-16T10:58:49.934540441Z" level=info msg="Daemon has completed initialization"
	Sep 16 10:58:49 ubuntu-20-agent-2 dockerd[83010]: time="2024-09-16T10:58:49.968180282Z" level=info msg="API listen on /run/docker.sock"
	Sep 16 10:58:49 ubuntu-20-agent-2 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:58:49 ubuntu-20-agent-2 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 16 10:58:49 ubuntu-20-agent-2 systemd[1]: cri-docker.service: Succeeded.
	Sep 16 10:58:49 ubuntu-20-agent-2 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 16 10:58:50 ubuntu-20-agent-2 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Start docker client with request timeout 0s"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Loaded network plugin cni"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 16 10:58:50 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:50Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 16 10:58:50 ubuntu-20-agent-2 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 16 10:58:57 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/42bf98e2b6cf0389b807cbf7c0d56caddc2cc64c03ac3f43188cc03e4800a805/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:58:57 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e6abc89eebda71d70c25d351a7d3ca8477c05cd7733a541a50b363a765a07fe/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:58:57 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c597537f5ef28df50eedcdf785372903fd977406b93ca73e30dc23aa64d23247/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	Sep 16 10:58:57 ubuntu-20-agent-2 cri-dockerd[83338]: time="2024-09-16T10:58:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba1b141c4bf657ac24897351c2bdbf324b4bb5432b85339fe7e441e32b6003c7/resolv.conf as [nameserver 169.254.169.254 search us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26d9c0b5e6ffa       6bab7719df100       5 seconds ago       Running             kube-apiserver            0                   ba1b141c4bf65       kube-apiserver-ubuntu-20-agent-2
	0762973c3ff32       9aa1fad941575       5 seconds ago       Running             kube-scheduler            0                   5e6abc89eebda       kube-scheduler-ubuntu-20-agent-2
	42a39639b3914       2e96e5913fc06       5 seconds ago       Running             etcd                      0                   c597537f5ef28       etcd-ubuntu-20-agent-2
	17238a8c948b2       175ffd71cce3d       5 seconds ago       Running             kube-controller-manager   0                   42bf98e2b6cf0       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_59_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:58:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:59:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:59:01 +0000   Mon, 16 Sep 2024 10:58:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:59:01 +0000   Mon, 16 Sep 2024 10:58:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:59:01 +0000   Mon, 16 Sep 2024 10:58:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:59:01 +0000   Mon, 16 Sep 2024 10:58:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    21d333ec-4d31-4efe-9267-b6cb1bcf2a42
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2s
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age              From     Message
	  ----     ------                   ----             ----     -------
	  Normal   NodeHasSufficientMemory  6s (x8 over 6s)  kubelet  Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6s (x7 over 6s)  kubelet  Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6s (x7 over 6s)  kubelet  Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   Starting                 1s               kubelet  Starting kubelet.
	  Warning  CgroupV1                 1s               kubelet  Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  1s               kubelet  Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  1s               kubelet  Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    1s               kubelet  Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     1s               kubelet  Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
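Note the Taints field above still shows node.kubernetes.io/not-ready:NoSchedule even though the Ready condition is True: seconds after kubeadm init, before the bridge CNI has carried any traffic, that lag is normal and the taint clears on its own. To watch it disappear:

    # The not-ready taint should drop off once networking settles.
    kubectl get node ubuntu-20-agent-2 -o jsonpath='{.spec.taints[*].key}{"\n"}'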
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 3b 08 e1 58 50 08 06
	[ +25.299353] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 19 fd 67 89 5e 08 06
	[Sep16 10:49] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee 56 d8 bc 2c 99 08 06
	[ +35.064752] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0f 34 cd af df 08 06
	[Sep16 10:50] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 9c f5 dc 07 74 08 06
	[Sep16 10:54] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 5e f7 75 6a ab 08 06
	[Sep16 10:55] IPv4: martian source 10.244.0.1 from 10.244.0.35, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 3f a1 40 4e ac 08 06
	[  +0.053118] IPv4: martian source 10.244.0.1 from 10.244.0.36, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 71 52 c0 cf 2b 08 06
	[Sep16 10:56] IPv4: martian source 10.244.0.1 from 10.244.0.37, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 2e 72 5a 25 73 08 06
	[Sep16 10:57] IPv4: martian source 10.244.0.1 from 10.244.0.38, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 7c f2 11 d8 b1 08 06
	[ +23.401465] IPv4: martian source 10.244.0.1 from 10.244.0.39, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa ee f9 60 f3 84 08 06
	[Sep16 10:58] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 06 ad c3 35 4b 08 06
	[ +21.538640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 6a a4 d3 9b 08 08 06
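The dmesg window is dominated by "martian source" warnings: packets from the 10.244.0.0/16 pod range arriving on eth0 with a source address the kernel considers impossible for that interface. On a CI host that has hosted many bridged clusters this is routine noise, not a failure. If the log spam matters, it can be silenced at the host level (this run leaves it enabled):

    # Optional host tweak; not something this test run performs.
    sudo sysctl -w net.ipv4.conf.all.log_martians=0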
	
	
	==> etcd [42a39639b391] <==
	{"level":"info","ts":"2024-09-16T10:58:57.382313Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-16T10:58:57.382368Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:58:57.382393Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:58:57.382678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c switched to configuration voters=(7729122085501172796)"}
	{"level":"info","ts":"2024-09-16T10:58:57.382746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","added-peer-id":"6b435b960bec7c3c","added-peer-peer-urls":["https://10.138.0.48:2380"]}
	{"level":"info","ts":"2024-09-16T10:58:57.972015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:58:57.972105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:58:57.972122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-16T10:58:57.972134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:57.972140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:57.972149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:57.972156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:57.973330Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:58:57.973442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:57.973509Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:57.973594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:57.973621Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:57.973414Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:57.974624Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:57.974894Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:57.974975Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:57.975451Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:57.976089Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:57.976675Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-16T10:58:57.977313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:59:02 up 41 min,  0 users,  load average: 2.44, 1.15, 0.61
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [26d9c0b5e6ff] <==
	I0916 10:58:58.910073       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:58:58.910566       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:58:58.910602       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:58:58.910610       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:58:58.910619       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:58:58.910625       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:58:58.918775       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:58:58.925092       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:58:58.925114       1 policy_source.go:224] refreshing policies
	E0916 10:58:58.963670       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 10:58:59.012274       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:58:59.094365       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:58:59.814928       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:58:59.818980       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:58:59.819002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:59:00.229070       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:59:00.263401       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:59:00.316784       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:59:00.323169       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [10.138.0.48]
	I0916 10:59:00.324166       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:59:00.328079       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:59:00.839038       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:59:01.109351       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:59:01.119196       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:59:01.130154       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [17238a8c948b] <==
	I0916 10:59:01.637859       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0916 10:59:01.637875       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0916 10:59:01.790616       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I0916 10:59:01.790704       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0916 10:59:01.790720       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0916 10:59:01.938285       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I0916 10:59:01.938349       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0916 10:59:01.938370       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0916 10:59:02.193552       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I0916 10:59:02.193621       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0916 10:59:02.193632       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0916 10:59:02.338799       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I0916 10:59:02.338894       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I0916 10:59:02.338910       1 shared_informer.go:313] Waiting for caches to sync for job
	I0916 10:59:02.487963       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I0916 10:59:02.488037       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0916 10:59:02.488049       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0916 10:59:02.638228       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0916 10:59:02.638250       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0916 10:59:02.638254       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0916 10:59:02.638269       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0916 10:59:02.638273       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0916 10:59:02.788577       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I0916 10:59:02.788672       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0916 10:59:02.788689       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	
	
	==> kube-scheduler [0762973c3ff3] <==
	W0916 10:58:58.859163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:58:58.859108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:58:58.859185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0916 10:58:58.859132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:58:58.859136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:58.859246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:58:58.859271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:58.859280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:58:58.859315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:59.717326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:58:59.717367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:59.749922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:58:59.749968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:59.823654       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:58:59.823700       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:58:59.870665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:58:59.870703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:59.900089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:58:59.900131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:59.948532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:58:59.948582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:58:59.978968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:58:59.979020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:59:00.019518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:59:00.019559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-03 06:18:09 UTC, end at Mon 2024-09-16 10:59:02 UTC. --
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.322205   84276 kubelet_node_status.go:75] "Successfully registered node" node="ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.333343   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-ca-certs\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.433788   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-flexvolume-dir\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.433875   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-kubeconfig\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.433945   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-usr-local-share-ca-certificates\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.433999   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a30c9a7effa6f4f8172b7ac23690210b-k8s-certs\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"a30c9a7effa6f4f8172b7ac23690210b\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434040   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-etc-ca-certificates\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434067   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-k8s-certs\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434090   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a30c9a7effa6f4f8172b7ac23690210b-ca-certs\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"a30c9a7effa6f4f8172b7ac23690210b\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434112   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a30c9a7effa6f4f8172b7ac23690210b-usr-share-ca-certificates\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"a30c9a7effa6f4f8172b7ac23690210b\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434160   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccbff5351fb3e01bcec8c471c38698f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ubuntu-20-agent-2\" (UID: \"ccbff5351fb3e01bcec8c471c38698f0\") " pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434185   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/5b137b06bdfaed6743b655439322dfe0-etcd-data\") pod \"etcd-ubuntu-20-agent-2\" (UID: \"5b137b06bdfaed6743b655439322dfe0\") " pod="kube-system/etcd-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434209   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6de72559ec804c46642b9388a6a99321-kubeconfig\") pod \"kube-scheduler-ubuntu-20-agent-2\" (UID: \"6de72559ec804c46642b9388a6a99321\") " pod="kube-system/kube-scheduler-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434231   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/5b137b06bdfaed6743b655439322dfe0-etcd-certs\") pod \"etcd-ubuntu-20-agent-2\" (UID: \"5b137b06bdfaed6743b655439322dfe0\") " pod="kube-system/etcd-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434267   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a30c9a7effa6f4f8172b7ac23690210b-etc-ca-certificates\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"a30c9a7effa6f4f8172b7ac23690210b\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:59:01 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:01.434302   84276 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a30c9a7effa6f4f8172b7ac23690210b-usr-local-share-ca-certificates\") pod \"kube-apiserver-ubuntu-20-agent-2\" (UID: \"a30c9a7effa6f4f8172b7ac23690210b\") " pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:02.123147   84276 apiserver.go:52] "Watching apiserver"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:02.132832   84276 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: E0916 10:59:02.185800   84276 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ubuntu-20-agent-2\" already exists" pod="kube-system/kube-scheduler-ubuntu-20-agent-2"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: E0916 10:59:02.186757   84276 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ubuntu-20-agent-2\" already exists" pod="kube-system/kube-apiserver-ubuntu-20-agent-2"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: E0916 10:59:02.186829   84276 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ubuntu-20-agent-2\" already exists" pod="kube-system/kube-controller-manager-ubuntu-20-agent-2"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:02.200368   84276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-ubuntu-20-agent-2" podStartSLOduration=2.200348252 podStartE2EDuration="2.200348252s" podCreationTimestamp="2024-09-16 10:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:59:02.200338722 +0000 UTC m=+1.135695438" watchObservedRunningTime="2024-09-16 10:59:02.200348252 +0000 UTC m=+1.135704968"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:02.215467   84276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ubuntu-20-agent-2" podStartSLOduration=2.215447754 podStartE2EDuration="2.215447754s" podCreationTimestamp="2024-09-16 10:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:59:02.215356331 +0000 UTC m=+1.150713047" watchObservedRunningTime="2024-09-16 10:59:02.215447754 +0000 UTC m=+1.150804464"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:02.215586   84276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ubuntu-20-agent-2" podStartSLOduration=2.215580468 podStartE2EDuration="2.215580468s" podCreationTimestamp="2024-09-16 10:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:59:02.207801278 +0000 UTC m=+1.143157993" watchObservedRunningTime="2024-09-16 10:59:02.215580468 +0000 UTC m=+1.150937184"
	Sep 16 10:59:02 ubuntu-20-agent-2 kubelet[84276]: I0916 10:59:02.227430   84276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ubuntu-20-agent-2" podStartSLOduration=2.227410785 podStartE2EDuration="2.227410785s" podCreationTimestamp="2024-09-16 10:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:59:02.227403752 +0000 UTC m=+1.162760469" watchObservedRunningTime="2024-09-16 10:59:02.227410785 +0000 UTC m=+1.162767501"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (494.795µs)
helpers_test.go:263: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.128614763s)
--- FAIL: TestKubernetesUpgrade (18.48s)
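
Note: the "fork/exec /usr/local/bin/kubectl: exec format error" above means the kernel refused to execute the kubectl binary, which typically indicates an architecture mismatch or a corrupted/truncated file rather than a test-logic problem. A minimal check on the affected host, assuming the same binary path as in the log (expected values below assume the amd64 agent):

	# Inspect the binary format; an x86_64 host should see an ELF 64-bit x86-64 executable.
	file /usr/local/bin/kubectl
	# Confirm the host architecture for comparison (expected here: x86_64).
	uname -m

An HTML error page or zero-byte file saved in place of the binary produces the same "exec format error".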

                                                
                                    

Test pass (85/167)

Order  Passed test  Duration (s)
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 1
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 41.08
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 104.19
35 TestAddons/parallel/InspektorGadget 10.46
40 TestAddons/parallel/Headlamp 14.86
41 TestAddons/parallel/CloudSpanner 5.25
43 TestAddons/parallel/NvidiaDevicePlugin 5.23
44 TestAddons/parallel/Yakd 11.42
45 TestAddons/StoppedEnableDisable 10.84
47 TestCertExpiration 226.63
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 23.28
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 30.67
65 TestFunctional/serial/MinikubeKubectlCmd 0.12
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 37.39
69 TestFunctional/serial/LogsCmd 0.79
70 TestFunctional/serial/LogsFileCmd 0.83
73 TestFunctional/parallel/ConfigCmd 0.26
75 TestFunctional/parallel/DryRun 0.16
76 TestFunctional/parallel/InternationalLanguage 0.09
77 TestFunctional/parallel/StatusCmd 0.47
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
81 TestFunctional/parallel/ProfileCmd/profile_list 0.26
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
91 TestFunctional/parallel/AddonsCmd 0.11
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.25
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.19
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.85
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.37
121 TestFunctional/parallel/License 0.21
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.02
124 TestFunctional/delete_minikube_cached_images 0.02
129 TestImageBuild/serial/Setup 14.03
130 TestImageBuild/serial/NormalBuild 1.51
131 TestImageBuild/serial/BuildWithBuildArg 0.83
132 TestImageBuild/serial/BuildWithDockerIgnore 0.56
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.58
137 TestJSONOutput/start/Command 24.66
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.5
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.4
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 5.34
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.04
166 TestMinikubeProfile 32.89
174 TestPause/serial/Start 28.87
175 TestPause/serial/SecondStartNoReconfiguration 32.04
176 TestPause/serial/Pause 0.48
177 TestPause/serial/VerifyStatus 0.13
178 TestPause/serial/Unpause 0.41
179 TestPause/serial/PauseAgain 0.52
180 TestPause/serial/DeletePaused 1.7
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 70.8
197 TestStoppedBinaryUpgrade/Setup 0.52
198 TestStoppedBinaryUpgrade/Upgrade 49.39
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (56.55318ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:28.124062   11069 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:28.124316   11069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:28.124326   11069 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:28.124330   11069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:28.124538   11069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	W0916 10:22:28.124648   11069 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19651-3763/.minikube/config/config.json: open /home/jenkins/minikube-integration/19651-3763/.minikube/config/config.json: no such file or directory
	I0916 10:22:28.125166   11069 out.go:352] Setting JSON to true
	I0916 10:22:28.126075   11069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":299,"bootTime":1726481849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:28.126165   11069 start.go:139] virtualization: kvm guest
	I0916 10:22:28.128458   11069 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:22:28.128574   11069 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:22:28.128623   11069 notify.go:220] Checking for updates...
	I0916 10:22:28.130017   11069 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:28.131347   11069 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:28.132661   11069 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:22:28.134000   11069 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:22:28.135196   11069 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:22:28.137411   11069 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:22:28.137645   11069 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:28.149524   11069 out.go:97] Using the none driver based on user configuration
	I0916 10:22:28.149546   11069 start.go:297] selected driver: none
	I0916 10:22:28.149557   11069 start.go:901] validating driver "none" against <nil>
	I0916 10:22:28.149587   11069 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 10:22:28.150171   11069 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:22:28.150976   11069 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 10:22:28.151185   11069 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:22:28.151224   11069 cni.go:84] Creating CNI manager for ""
	I0916 10:22:28.151295   11069 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:22:28.151372   11069 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:28.152781   11069 out.go:97] Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 10:22:28.153248   11069 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json ...
	I0916 10:22:28.153281   11069 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3763/.minikube/profiles/minikube/config.json: {Name:mk8d2d4268fc09048f441bc25e86c5b7f11d00d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:28.153468   11069 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:22:28.153770   11069 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0916 10:22:28.153767   11069 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubeadm
	I0916 10:22:28.153776   11069 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet
	I0916 10:22:29.532492   11069 out.go:193] 
	W0916 10:22:29.533823   11069 out_reason.go:110] Failed to cache binaries: caching binary kubelet: download failed: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19651-3763/.minikube/cache/linux/amd64/v1.20.0/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200 0x4d1c200] Decompressors:map[bz2:0xc000600f20 gz:0xc000600f28 tar:0xc000600ed0 tar.bz2:0xc000600ee0 tar.gz:0xc000600ef0 tar.xz:0xc000600f00 tar.zst:0xc000600f10 tbz2:0xc000600ee0 tgz:0xc000600ef0 txz:0xc000600f00 tzst:0xc000600f10 xz:0xc000600f30 zip:0xc000600f40 zst:0xc000600f38] Getters:map[file:0xc00188c0c0 http:0xc001888050 https:0xc0018880a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: stream error: stream ID 1; PROTOCOL_ERROR; received from peer
	W0916 10:22:29.533836   11069 out_reason.go:110] 
	W0916 10:22:29.535958   11069 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:22:29.537285   11069 out.go:193] 
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
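
Note: the failure recorded in the Last Start log above is an HTTP/2 stream error (PROTOCOL_ERROR) while caching the kubelet v1.20.0 binary. A minimal sketch for reproducing the fetch and checksum verification by hand, using the URLs from the log; forcing HTTP/1.1 is a common way to rule out HTTP/2-specific stream errors (the flags shown are standard curl options, not part of the test suite):

	# Re-download the kubelet binary that failed to cache, forcing HTTP/1.1.
	curl --http1.1 -fLo /tmp/kubelet "https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet"
	# Verify it against the published SHA-256, mirroring the getter's checksum=file: check.
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256)  /tmp/kubelet" | sha256sum --check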

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.338594ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:29.909323   11235 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:29.909417   11235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:29.909425   11235 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:29.909429   11235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:29.909618   11235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:22:29.910188   11235 out.go:352] Setting JSON to true
	I0916 10:22:29.911058   11235 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":301,"bootTime":1726481849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:29.911154   11235 start.go:139] virtualization: kvm guest
	I0916 10:22:29.913448   11235 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:22:29.913545   11235 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:22:29.913589   11235 notify.go:220] Checking for updates...
	I0916 10:22:29.914927   11235 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:29.916428   11235 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:29.917792   11235 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:22:29.919328   11235 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:22:29.920611   11235 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:40127 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

                                                
                                    
TestOffline (41.08s)

                                                
                                                
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.487790686s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.589902518s)
--- PASS: TestOffline (41.08s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (44.796803ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (43.53424ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/Setup (104.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m44.185314496s)
--- PASS: TestAddons/Setup (104.19s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.46s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zt2b4" [c0a97873-e0c3-41a1-af0b-2ece8d95b20a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003925028s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.458417423s)
--- PASS: TestAddons/parallel/InspektorGadget (10.46s)

                                                
                                    
TestAddons/parallel/Headlamp (14.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-pqvqn" [d6edb9ff-b47c-4f5d-b771-7a9c07d26049] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-pqvqn" [d6edb9ff-b47c-4f5d-b771-7a9c07d26049] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-pqvqn" [d6edb9ff-b47c-4f5d-b771-7a9c07d26049] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-pqvqn" [d6edb9ff-b47c-4f5d-b771-7a9c07d26049] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004288695s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.403184604s)
--- PASS: TestAddons/parallel/Headlamp (14.86s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-7x6cj" [3bd17112-ef61-4e71-a968-3dfab95d9033] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003051841s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dcrh9" [ea92c06a-bdf2-4869-826f-9e7e50c03206] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004006203s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.23s)

                                                
                                    
TestAddons/parallel/Yakd (11.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ggfmd" [bbdabfe7-fc70-4d1d-8d05-bab88ba1c48e] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003858606s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.416522622s)
--- PASS: TestAddons/parallel/Yakd (11.42s)

                                                
                                    
TestAddons/StoppedEnableDisable (10.84s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.547498711s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.84s)

                                                
                                    
TestCertExpiration (226.63s)

                                                
                                                
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.458651776s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.49577607s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.674178116s)
--- PASS: TestCertExpiration (226.63s)
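
Note on the values above: the second start bumps --cert-expiration from 3m to 8760h, i.e. from three minutes to exactly one year (8760 = 365 × 24). A throwaway Go check of that arithmetic, not part of the suite:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	long, _ := time.ParseDuration("8760h") // the flag value used above
    	fmt.Println(long.Hours()/24, "days")   // 8760 / 24 = 365
    }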

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19651-3763/.minikube/files/etc/test/nested/copy/11057/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (23.28s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (23.27528426s)
--- PASS: TestFunctional/serial/StartWithProxy (23.28s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.67s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (30.668694949s)
functional_test.go:663: soft start took 30.669203819s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.67s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.391017205s)
functional_test.go:761: restart took 37.391106065s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.39s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.79s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd4150176622/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.83s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (43.02944ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.287486ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)
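
The two non-zero exits above are expected: in this run, "config get" returned exit status 14 whenever the queried key was unset. A minimal Go sketch of how a caller could branch on that code, assuming only what this log shows (the relative binary path is the report's own build output and may differ elsewhere):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same invocation as the test above.
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "minikube", "config", "get", "cpus")
    	out, err := cmd.Output()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Printf("cpus is set to %s", out)
    	case errors.As(err, &ee) && ee.ExitCode() == 14:
    		// Exit status 14 is what this run produced for an unset key.
    		fmt.Println("cpus is not set")
    	default:
    		fmt.Println("unexpected failure:", err)
    	}
    }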

                                                
                                    
TestFunctional/parallel/DryRun (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (81.939846ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:50:16.840612   49492 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:50:16.840751   49492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:16.840762   49492 out.go:358] Setting ErrFile to fd 2...
	I0916 10:50:16.840767   49492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:16.841298   49492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:50:16.842343   49492 out.go:352] Setting JSON to false
	I0916 10:50:16.843458   49492 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1968,"bootTime":1726481849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:50:16.843561   49492 start.go:139] virtualization: kvm guest
	I0916 10:50:16.845396   49492 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:50:16.846703   49492 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:50:16.846728   49492 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:50:16.846797   49492 notify.go:220] Checking for updates...
	I0916 10:50:16.849416   49492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:50:16.850585   49492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:16.851847   49492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:50:16.853053   49492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:50:16.854222   49492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:50:16.856037   49492 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:16.856477   49492 exec_runner.go:51] Run: systemctl --version
	I0916 10:50:16.859631   49492 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:50:16.871707   49492 out.go:177] * Using the none driver based on existing profile
	I0916 10:50:16.873086   49492 start.go:297] selected driver: none
	I0916 10:50:16.873112   49492 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:50:16.873293   49492 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:50:16.873333   49492 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:50:16.873748   49492 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0916 10:50:16.876003   49492 out.go:201] 
	W0916 10:50:16.877250   49492 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:50:16.878464   49492 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (86.55015ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:50:17.013809   49522 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:50:17.013928   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.013940   49522 out.go:358] Setting ErrFile to fd 2...
	I0916 10:50:17.013947   49522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:50:17.014283   49522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3763/.minikube/bin
	I0916 10:50:17.014884   49522 out.go:352] Setting JSON to false
	I0916 10:50:17.016300   49522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1968,"bootTime":1726481849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:50:17.016418   49522 start.go:139] virtualization: kvm guest
	I0916 10:50:17.018914   49522 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0916 10:50:17.020443   49522 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3763/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:50:17.020481   49522 notify.go:220] Checking for updates...
	I0916 10:50:17.020483   49522 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:50:17.021852   49522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:50:17.023292   49522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig
	I0916 10:50:17.024682   49522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube
	I0916 10:50:17.025975   49522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:50:17.027472   49522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:50:17.029411   49522 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:50:17.029834   49522 exec_runner.go:51] Run: systemctl --version
	I0916 10:50:17.032099   49522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:50:17.042311   49522 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0916 10:50:17.043885   49522 start.go:297] selected driver: none
	I0916 10:50:17.043900   49522 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:50:17.044037   49522 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:50:17.044058   49522 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 10:50:17.044345   49522 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0916 10:50:17.046514   49522 out.go:201] 
	W0916 10:50:17.047718   49522 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 10:50:17.049056   49522 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "208.051002ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.897412ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "212.681079ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "54.585021ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 51632: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.1886715s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (13.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.84648552s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.85s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.37s)

                                                
                                    
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (14.03s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.028733602s)
--- PASS: TestImageBuild/serial/Setup (14.03s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.511553102s)
--- PASS: TestImageBuild/serial/NormalBuild (1.51s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.58s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.58s)

                                                
                                    
TestJSONOutput/start/Command (24.66s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (24.662136249s)
--- PASS: TestJSONOutput/start/Command (24.66s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.5s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.4s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.34034834s)
--- PASS: TestJSONOutput/stop/Command (5.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.374958ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"886a0d1c-104e-4669-b28e-2afbce10668b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"940dd51d-7efa-4fc2-bd99-866323b452c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"780fe822-7414-4ac1-9db5-5f076dbd15c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d2646f13-dde5-49b3-a68a-abfa1fd61167","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3763/kubeconfig"}}
	{"specversion":"1.0","id":"db919f3a-64a3-4f0b-9dfe-5ad97dbd0343","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3763/.minikube"}}
	{"specversion":"1.0","id":"aa1b87b3-27ce-4bc4-8abc-f21f3ff22116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7c06c6f9-0fc5-4a7d-b4d6-9dd1885a57f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2e43bed2-f174-4a17-a4a1-2bed849a79b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
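
Each stdout line above is a self-contained JSON event in a CloudEvents-style envelope, with the error carried in the "data" payload (here name DRV_UNSUPPORTED_OS, exitcode 56). A minimal Go sketch for picking the error event out of such a stream; the struct models only the fields visible in this output and is illustrative, not minikube's own type:

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    	"strings"
    )

    // event models only the envelope fields visible in the lines above.
    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	// Pipe the JSON output in, e.g.:
    	//   out/minikube-linux-amd64 start --output=json ... | ./thisprogram
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if !strings.HasPrefix(line, "{") {
    			continue // skip anything that isn't a JSON event line
    		}
    		var e event
    		if err := json.Unmarshal([]byte(line), &e); err != nil {
    			continue
    		}
    		if e.Type == "io.k8s.sigs.minikube.error" {
    			fmt.Printf("error event: name=%s exitcode=%s message=%s\n",
    				e.Data["name"], e.Data["exitcode"], e.Data["message"])
    		}
    	}
    }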

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (32.89s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.000852774s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.049037128s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.224938489s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (32.89s)

                                                
                                    
TestPause/serial/Start (28.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.871973608s)
--- PASS: TestPause/serial/Start (28.87s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (32.04s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.035002185s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.04s)

                                                
                                    
TestPause/serial/Pause (0.48s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

                                                
                                    
TestPause/serial/VerifyStatus (0.13s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (125.626346ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
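
For reference, the --output=json --layout=cluster result above is a single JSON document, and in this run a paused cluster reported StatusCode 418 ("Paused"), a stopped kubelet 405 ("Stopped"), and healthy components 200 ("OK"), with the command itself exiting 2 rather than 0. A Go sketch for decoding it, modelling only the fields this run printed (the type names are illustrative, not minikube's own):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // clusterStatus models the fields visible in the status JSON above.
    type clusterStatus struct {
    	Name       string `json:"Name"`
    	StatusCode int    `json:"StatusCode"`
    	StatusName string `json:"StatusName"`
    	Nodes      []struct {
    		Name       string `json:"Name"`
    		StatusCode int    `json:"StatusCode"`
    		StatusName string `json:"StatusName"`
    	} `json:"Nodes"`
    }

    func main() {
    	// Abbreviated sample shaped like the output above.
    	raw := []byte(`{"Name":"minikube","StatusCode":418,"StatusName":"Paused",
    		"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK"}]}`)
    	var st clusterStatus
    	if err := json.Unmarshal(raw, &st); err != nil {
    		panic(err)
    	}
    	// 418 was "Paused" in this run; anything but 200 ("OK") is not fully running.
    	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
    	for _, n := range st.Nodes {
    		fmt.Printf("  node %s: %s (%d)\n", n.Name, n.StatusName, n.StatusCode)
    	}
    }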

                                                
                                    
TestPause/serial/Unpause (0.41s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

                                                
                                    
TestPause/serial/PauseAgain (0.52s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.52s)

                                                
                                    
TestPause/serial/DeletePaused (1.7s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.703783876s)
--- PASS: TestPause/serial/DeletePaused (1.70s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                    
TestRunningBinaryUpgrade (70.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2465017182 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2465017182 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (31.294020631s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.898185595s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.914122571s)
--- PASS: TestRunningBinaryUpgrade (70.80s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (49.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1351968760 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1351968760 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.800162273s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1351968760 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1351968760 -p minikube stop: (23.640211899s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (10.949704071s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.39s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    

Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.13
189 TestStartStop/group/newest-cni 0.12
190 TestStartStop/group/default-k8s-diff-port 0.13
191 TestStartStop/group/no-preload 0.13
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.13
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
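Every entry below is produced the same way: the test inspects the configured driver (or host OS) up front and bails out via t.Skipf, which `go test` reports as a `--- SKIP` line carrying the message. An illustrative Go sketch of the pattern — TestMountSketch is invented for illustration, and NoneDriver here is a stand-in for the suite's real driver check (in the actual suite the active driver comes from the start flags):

    package integration

    import "testing"

    // NoneDriver is a stand-in: it reports whether the suite is running
    // against the none driver.
    func NoneDriver() bool { return true }

    // TestMountSketch shows the skip pattern: gate on the driver first,
    // and let t.Skipf emit the "--- SKIP: ..." line with the reason.
    func TestMountSketch(t *testing.T) {
        if NoneDriver() {
            t.Skipf("skipping: none driver does not support mount")
        }
        // Driver-specific assertions would run here on other drivers.
    }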

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.12s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)