=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.982822ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0919 18:51:04.575308 14476 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:51:04.575331 14476 kapi.go:107] duration metric: took 4.306263ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-bxkct" [5daab8c5-d486-4f2e-a165-b7129bb49ef1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002638766s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bbpkk" [073b4ea3-119e-40f8-9331-51fd7dfdf5bf] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003625675s
addons_test.go:342: (dbg) Run: kubectl --context addons-807343 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-807343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-807343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.068370206s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-807343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
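Editor's note: the failed probe above is just an HTTP liveness check. `wget --spider -S` sends a request without downloading the body and the test passes only if the service answers `HTTP/1.1 200`; here it timed out before any response arrived. A minimal, self-contained sketch of the same check (probing a throwaway local server, since `registry.kube-system.svc.cluster.local` only resolves inside the cluster; `spider` and `OkHandler` are hypothetical names, not part of the test suite):

```python
import http.server
import threading
import urllib.request

# Stand-in for the registry Service: any endpoint answering 200 OK.
class OkHandler(http.server.BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the server quiet

server = http.server.HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def spider(url, timeout=5):
    """Mimic `wget --spider`: issue a HEAD request, return the status code."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

status = spider(f"http://127.0.0.1:{server.server_port}")
print(status)
server.shutdown()
```

When the cluster-internal DNS name is unreachable (as in the failure above), the equivalent of `urlopen` raising a timeout is what surfaces as `error: timed out waiting for the condition`.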
addons_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p addons-807343 ip
2024/09/19 18:52:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-amd64 -p addons-807343 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-807343
helpers_test.go:235: (dbg) docker inspect addons-807343:
-- stdout --
[
    {
        "Id": "aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3",
        "Created": "2024-09-19T18:39:21.083354509Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 16549,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2024-09-19T18:39:21.204764889Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
        "ResolvConfPath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/hostname",
        "HostsPath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/hosts",
        "LogPath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3-json.log",
        "Name": "/addons-807343",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "unconfined",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro",
                "addons-807343:/var"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "100m"
                }
            },
            "NetworkMode": "addons-807343",
            "PortBindings": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "32443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "ConsoleSize": [
                0,
                0
            ],
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "apparmor=unconfined",
                "label=disable"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 4194304000,
            "NanoCpus": 2000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": [],
            "BlkioDeviceWriteBps": [],
            "BlkioDeviceReadIOps": [],
            "BlkioDeviceWriteIOps": [],
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "MemoryReservation": 0,
            "MemorySwap": 8388608000,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": null,
            "ReadonlyPaths": null
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e-init/diff:/var/lib/docker/overlay2/a747039cf8c6806beef023824f909e863f6f9c2668e5d190ac4e313f702c001e/diff",
                "MergedDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e/merged",
                "UpperDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e/diff",
                "WorkDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "addons-807343",
                "Source": "/var/lib/docker/volumes/addons-807343/_data",
                "Destination": "/var",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "addons-807343",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "22/tcp": {},
                "2376/tcp": {},
                "32443/tcp": {},
                "5000/tcp": {},
                "8443/tcp": {}
            },
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "container=docker",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": {
                "created_by.minikube.sigs.k8s.io": "true",
                "mode.minikube.sigs.k8s.io": "addons-807343",
                "name.minikube.sigs.k8s.io": "addons-807343",
                "role.minikube.sigs.k8s.io": ""
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "893364c9432bbb73ed97baaf3c2546a6b86c2aa8734883a56cbcd5a406e8bc46",
            "SandboxKey": "/var/run/docker/netns/893364c9432b",
            "Ports": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32768"
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32769"
                    }
                ],
                "32443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32772"
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32770"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32771"
                    }
                ]
            },
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "addons-807343": {
                    "IPAMConfig": {
                        "IPv4Address": "192.168.49.2"
                    },
                    "Links": null,
                    "Aliases": null,
                    "MacAddress": "02:42:c0:a8:31:02",
                    "DriverOpts": null,
                    "NetworkID": "2a3f14ee82ebab40d332e259f56e05ea2ce6e3875077d28be72472d8bcb46737",
                    "EndpointID": "c757973548812db48ce264fa61f5ca1271f4a59b91b82f2828499cc056c04e70",
                    "Gateway": "192.168.49.1",
                    "IPAddress": "192.168.49.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DNSNames": [
                        "addons-807343",
                        "aef97022a03b"
                    ]
                }
            }
        }
    }
]
-- /stdout --
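Editor's note: the `NetworkSettings.Ports` section of the inspect output above records where each container port is published on the host (e.g. the registry's 5000/tcp at 127.0.0.1:32770), which is how the post-mortem can probe the addon from outside. A short sketch of extracting that binding from `docker inspect` JSON, using a trimmed fragment of the output above (`host_binding` is a hypothetical helper, not part of the test suite):

```python
import json

# Trimmed fragment of the `docker inspect addons-807343` output above.
INSPECT_OUTPUT = """
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
        "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
      }
    }
  }
]
"""

def host_binding(inspect_output, container_port):
    """Return (host_ip, host_port) for a published container port.

    `docker inspect` emits a JSON array with one object per container;
    each published port maps to a list of host bindings.
    """
    ports = json.loads(inspect_output)[0]["NetworkSettings"]["Ports"]
    binding = ports[container_port][0]
    return binding["HostIp"], binding["HostPort"]

ip, port = host_binding(INSPECT_OUTPUT, "5000/tcp")
print(f"registry published on {ip}:{port}")
```

Note the test's `DEBUG GET http://192.168.49.2:5000` line instead targets the container's network IP directly; both routes lead to the same registry container port.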
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-807343 -n addons-807343
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-807343 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| delete | -p download-docker-260378 | download-docker-260378 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
| start | --download-only -p | binary-mirror-546000 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | |
| | binary-mirror-546000 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:33185 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-546000 | binary-mirror-546000 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
| addons | disable dashboard -p | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | |
| | addons-807343 | | | | | |
| addons | enable dashboard -p | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | |
| | addons-807343 | | | | | |
| start | -p addons-807343 --wait=true | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:42 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:42 UTC | 19 Sep 24 18:43 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| ssh | addons-807343 ssh cat | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | /opt/local-path-provisioner/pvc-ac5b37a8-6b22-43fd-8e57-431a7ab03924_default_test-pvc/file1 | | | | | |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable cloud-spanner -p | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | addons-807343 | | | | | |
| addons | enable headlamp | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | -p addons-807343 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-807343 addons | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-807343 addons | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-807343 addons | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | helm-tiller --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
| | -p addons-807343 | | | | | |
| addons | disable inspektor-gadget -p | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:52 UTC |
| | addons-807343 | | | | | |
| ssh | addons-807343 ssh curl -s | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-807343 ip | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
| | ingress --alsologtostderr -v=1 | | | | | |
| ip | addons-807343 ip | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
| addons | addons-807343 addons disable | addons-807343 | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/19 18:38:57
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0919 18:38:57.769495 15785 out.go:345] Setting OutFile to fd 1 ...
I0919 18:38:57.769590 15785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:38:57.769598 15785 out.go:358] Setting ErrFile to fd 2...
I0919 18:38:57.769603 15785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:38:57.769759 15785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
I0919 18:38:57.770270 15785 out.go:352] Setting JSON to false
I0919 18:38:57.771052 15785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1280,"bootTime":1726769858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0919 18:38:57.771156 15785 start.go:139] virtualization: kvm guest
I0919 18:38:57.773048 15785 out.go:177] * [addons-807343] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I0919 18:38:57.774079 15785 out.go:177] - MINIKUBE_LOCATION=19664
I0919 18:38:57.774083 15785 notify.go:220] Checking for updates...
I0919 18:38:57.775989 15785 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0919 18:38:57.777123 15785 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
I0919 18:38:57.778176 15785 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
I0919 18:38:57.779187 15785 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0919 18:38:57.780208 15785 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0919 18:38:57.781428 15785 driver.go:394] Setting default libvirt URI to qemu:///system
I0919 18:38:57.801472 15785 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
I0919 18:38:57.801539 15785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 18:38:57.843807 15785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:38:57.835623197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0919 18:38:57.843957 15785 docker.go:318] overlay module found
I0919 18:38:57.845564 15785 out.go:177] * Using the docker driver based on user configuration
I0919 18:38:57.846478 15785 start.go:297] selected driver: docker
I0919 18:38:57.846490 15785 start.go:901] validating driver "docker" against <nil>
I0919 18:38:57.846503 15785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0919 18:38:57.847471 15785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 18:38:57.889589 15785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:38:57.881813913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0919 18:38:57.889780 15785 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0919 18:38:57.889993 15785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0919 18:38:57.891441 15785 out.go:177] * Using Docker driver with root privileges
I0919 18:38:57.892507 15785 cni.go:84] Creating CNI manager for ""
I0919 18:38:57.892557 15785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0919 18:38:57.892567 15785 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0919 18:38:57.892616 15785 start.go:340] cluster config:
{Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0919 18:38:57.893673 15785 out.go:177] * Starting "addons-807343" primary control-plane node in "addons-807343" cluster
I0919 18:38:57.894637 15785 cache.go:121] Beginning downloading kic base image for docker with docker
I0919 18:38:57.895627 15785 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
I0919 18:38:57.896558 15785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 18:38:57.896588 15785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I0919 18:38:57.896597 15785 cache.go:56] Caching tarball of preloaded images
I0919 18:38:57.896645 15785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
I0919 18:38:57.896679 15785 preload.go:172] Found /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0919 18:38:57.896689 15785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0919 18:38:57.897025 15785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/config.json ...
I0919 18:38:57.897048 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/config.json: {Name:mkbc202ab93ac6c9af3368c03dc9b7ef5c44a6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:38:57.910756 15785 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
I0919 18:38:57.910832 15785 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
I0919 18:38:57.910844 15785 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
I0919 18:38:57.910848 15785 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
I0919 18:38:57.910854 15785 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
I0919 18:38:57.910861 15785 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
I0919 18:39:09.564058 15785 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
I0919 18:39:09.564091 15785 cache.go:194] Successfully downloaded all kic artifacts
I0919 18:39:09.564125 15785 start.go:360] acquireMachinesLock for addons-807343: {Name:mk65a2ec792cea9016395641b31b3f3ce57d8e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 18:39:09.564205 15785 start.go:364] duration metric: took 63.392µs to acquireMachinesLock for "addons-807343"
I0919 18:39:09.564224 15785 start.go:93] Provisioning new machine with config: &{Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0919 18:39:09.564288 15785 start.go:125] createHost starting for "" (driver="docker")
I0919 18:39:09.565738 15785 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0919 18:39:09.565992 15785 start.go:159] libmachine.API.Create for "addons-807343" (driver="docker")
I0919 18:39:09.566023 15785 client.go:168] LocalClient.Create starting
I0919 18:39:09.566116 15785 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem
I0919 18:39:09.652101 15785 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem
I0919 18:39:09.947918 15785 cli_runner.go:164] Run: docker network inspect addons-807343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 18:39:09.963293 15785 cli_runner.go:211] docker network inspect addons-807343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 18:39:09.963350 15785 network_create.go:284] running [docker network inspect addons-807343] to gather additional debugging logs...
I0919 18:39:09.963366 15785 cli_runner.go:164] Run: docker network inspect addons-807343
W0919 18:39:09.977079 15785 cli_runner.go:211] docker network inspect addons-807343 returned with exit code 1
I0919 18:39:09.977100 15785 network_create.go:287] error running [docker network inspect addons-807343]: docker network inspect addons-807343: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-807343 not found
I0919 18:39:09.977118 15785 network_create.go:289] output of [docker network inspect addons-807343]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-807343 not found
** /stderr **
I0919 18:39:09.977206 15785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 18:39:09.991120 15785 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001adaa40}
I0919 18:39:09.991158 15785 network_create.go:124] attempt to create docker network addons-807343 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0919 18:39:09.991195 15785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-807343 addons-807343
I0919 18:39:10.044425 15785 network_create.go:108] docker network addons-807343 192.168.49.0/24 created
I0919 18:39:10.044455 15785 kic.go:121] calculated static IP "192.168.49.2" for the "addons-807343" container
I0919 18:39:10.044514 15785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0919 18:39:10.057376 15785 cli_runner.go:164] Run: docker volume create addons-807343 --label name.minikube.sigs.k8s.io=addons-807343 --label created_by.minikube.sigs.k8s.io=true
I0919 18:39:10.072446 15785 oci.go:103] Successfully created a docker volume addons-807343
I0919 18:39:10.072519 15785 cli_runner.go:164] Run: docker run --rm --name addons-807343-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-807343 --entrypoint /usr/bin/test -v addons-807343:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
I0919 18:39:17.212365 15785 cli_runner.go:217] Completed: docker run --rm --name addons-807343-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-807343 --entrypoint /usr/bin/test -v addons-807343:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (7.139790141s)
I0919 18:39:17.212392 15785 oci.go:107] Successfully prepared a docker volume addons-807343
I0919 18:39:17.212414 15785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 18:39:17.212435 15785 kic.go:194] Starting extracting preloaded images to volume ...
I0919 18:39:17.212496 15785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-807343:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
I0919 18:39:21.026329 15785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-807343:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.813800374s)
I0919 18:39:21.026361 15785 kic.go:203] duration metric: took 3.813924362s to extract preloaded images to volume ...
W0919 18:39:21.026469 15785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0919 18:39:21.026550 15785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0919 18:39:21.069551 15785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-807343 --name addons-807343 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-807343 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-807343 --network addons-807343 --ip 192.168.49.2 --volume addons-807343:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
I0919 18:39:21.360007 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Running}}
I0919 18:39:21.377856 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:21.395214 15785 cli_runner.go:164] Run: docker exec addons-807343 stat /var/lib/dpkg/alternatives/iptables
I0919 18:39:21.436072 15785 oci.go:144] the created container "addons-807343" has a running status.
I0919 18:39:21.436111 15785 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa...
I0919 18:39:21.742892 15785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0919 18:39:21.761623 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:21.778849 15785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0919 18:39:21.778869 15785 kic_runner.go:114] Args: [docker exec --privileged addons-807343 chown docker:docker /home/docker/.ssh/authorized_keys]
I0919 18:39:21.825918 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:21.843680 15785 machine.go:93] provisionDockerMachine start ...
I0919 18:39:21.843771 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:21.859898 15785 main.go:141] libmachine: Using SSH client type: native
I0919 18:39:21.860112 15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0919 18:39:21.860126 15785 main.go:141] libmachine: About to run SSH command:
hostname
I0919 18:39:21.998104 15785 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-807343
I0919 18:39:21.998128 15785 ubuntu.go:169] provisioning hostname "addons-807343"
I0919 18:39:21.998187 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:22.015227 15785 main.go:141] libmachine: Using SSH client type: native
I0919 18:39:22.015473 15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0919 18:39:22.015492 15785 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-807343 && echo "addons-807343" | sudo tee /etc/hostname
I0919 18:39:22.159919 15785 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-807343
I0919 18:39:22.159986 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:22.176807 15785 main.go:141] libmachine: Using SSH client type: native
I0919 18:39:22.177000 15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0919 18:39:22.177019 15785 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-807343' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-807343/g' /etc/hosts;
else
echo '127.0.1.1 addons-807343' | sudo tee -a /etc/hosts;
fi
fi
I0919 18:39:22.302391 15785 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0919 18:39:22.302414 15785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7708/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7708/.minikube}
I0919 18:39:22.302428 15785 ubuntu.go:177] setting up certificates
I0919 18:39:22.302439 15785 provision.go:84] configureAuth start
I0919 18:39:22.302489 15785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-807343
I0919 18:39:22.317859 15785 provision.go:143] copyHostCerts
I0919 18:39:22.317919 15785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7708/.minikube/ca.pem (1078 bytes)
I0919 18:39:22.318016 15785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7708/.minikube/cert.pem (1123 bytes)
I0919 18:39:22.318073 15785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7708/.minikube/key.pem (1675 bytes)
I0919 18:39:22.318122 15785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca-key.pem org=jenkins.addons-807343 san=[127.0.0.1 192.168.49.2 addons-807343 localhost minikube]
I0919 18:39:22.454290 15785 provision.go:177] copyRemoteCerts
I0919 18:39:22.454341 15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0919 18:39:22.454389 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:22.470144 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:22.562481 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0919 18:39:22.582322 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0919 18:39:22.601621 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0919 18:39:22.620424 15785 provision.go:87] duration metric: took 317.975428ms to configureAuth
I0919 18:39:22.620443 15785 ubuntu.go:193] setting minikube options for container-runtime
I0919 18:39:22.620613 15785 config.go:182] Loaded profile config "addons-807343": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:39:22.620656 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:22.636292 15785 main.go:141] libmachine: Using SSH client type: native
I0919 18:39:22.636473 15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0919 18:39:22.636488 15785 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0919 18:39:22.762837 15785 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0919 18:39:22.762862 15785 ubuntu.go:71] root file system type: overlay
I0919 18:39:22.763303 15785 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0919 18:39:22.763406 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:22.779566 15785 main.go:141] libmachine: Using SSH client type: native
I0919 18:39:22.779769 15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0919 18:39:22.779849 15785 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0919 18:39:22.916013 15785 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0919 18:39:22.916084 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:22.931898 15785 main.go:141] libmachine: Using SSH client type: native
I0919 18:39:22.932052 15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0919 18:39:22.932068 15785 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0919 18:39:23.571247 15785 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:41.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-19 18:39:22.907379771 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0919 18:39:23.571276 15785 machine.go:96] duration metric: took 1.727574924s to provisionDockerMachine
I0919 18:39:23.571288 15785 client.go:171] duration metric: took 14.005257278s to LocalClient.Create
I0919 18:39:23.571306 15785 start.go:167] duration metric: took 14.005314967s to libmachine.API.Create "addons-807343"
I0919 18:39:23.571315 15785 start.go:293] postStartSetup for "addons-807343" (driver="docker")
I0919 18:39:23.571327 15785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0919 18:39:23.571391 15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0919 18:39:23.571436 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:23.587126 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:23.682960 15785 ssh_runner.go:195] Run: cat /etc/os-release
I0919 18:39:23.685630 15785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0919 18:39:23.685664 15785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0919 18:39:23.685676 15785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0919 18:39:23.685685 15785 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0919 18:39:23.685699 15785 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7708/.minikube/addons for local assets ...
I0919 18:39:23.685759 15785 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7708/.minikube/files for local assets ...
I0919 18:39:23.685789 15785 start.go:296] duration metric: took 114.468091ms for postStartSetup
I0919 18:39:23.686049 15785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-807343
I0919 18:39:23.702153 15785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/config.json ...
I0919 18:39:23.702397 15785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0919 18:39:23.702444 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:23.717641 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:23.811378 15785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0919 18:39:23.815165 15785 start.go:128] duration metric: took 14.250864255s to createHost
I0919 18:39:23.815189 15785 start.go:83] releasing machines lock for "addons-807343", held for 14.250973949s
I0919 18:39:23.815253 15785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-807343
I0919 18:39:23.830836 15785 ssh_runner.go:195] Run: cat /version.json
I0919 18:39:23.830865 15785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0919 18:39:23.830881 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:23.830926 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:23.846376 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:23.846756 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:23.934289 15785 ssh_runner.go:195] Run: systemctl --version
I0919 18:39:24.004699 15785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0919 18:39:24.008795 15785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0919 18:39:24.030163 15785 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0919 18:39:24.030234 15785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0919 18:39:24.053740 15785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0919 18:39:24.053763 15785 start.go:495] detecting cgroup driver to use...
I0919 18:39:24.053792 15785 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0919 18:39:24.053884 15785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 18:39:24.067394 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0919 18:39:24.075704 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0919 18:39:24.084089 15785 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0919 18:39:24.084137 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0919 18:39:24.092527 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 18:39:24.100742 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0919 18:39:24.108907 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 18:39:24.117093 15785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0919 18:39:24.124700 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0919 18:39:24.132811 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0919 18:39:24.140816 15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0919 18:39:24.149218 15785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0919 18:39:24.156295 15785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0919 18:39:24.163264 15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 18:39:24.237676 15785 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0919 18:39:24.309554 15785 start.go:495] detecting cgroup driver to use...
I0919 18:39:24.309610 15785 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0919 18:39:24.309659 15785 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0919 18:39:24.321516 15785 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0919 18:39:24.321585 15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 18:39:24.333943 15785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 18:39:24.348994 15785 ssh_runner.go:195] Run: which cri-dockerd
I0919 18:39:24.352406 15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0919 18:39:24.360693 15785 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0919 18:39:24.376094 15785 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0919 18:39:24.468983 15785 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0919 18:39:24.544209 15785 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0919 18:39:24.544355 15785 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0919 18:39:24.569328 15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 18:39:24.647033 15785 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 18:39:24.883161 15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0919 18:39:24.893297 15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0919 18:39:24.903040 15785 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0919 18:39:24.977219 15785 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 18:39:25.058059 15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 18:39:25.130328 15785 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0919 18:39:25.141773 15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0919 18:39:25.151266 15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 18:39:25.221331 15785 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0919 18:39:25.276597 15785 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0919 18:39:25.276675 15785 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0919 18:39:25.279901 15785 start.go:563] Will wait 60s for crictl version
I0919 18:39:25.279947 15785 ssh_runner.go:195] Run: which crictl
I0919 18:39:25.283042 15785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0919 18:39:25.312857 15785 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0919 18:39:25.312919 15785 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0919 18:39:25.333521 15785 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0919 18:39:25.356400 15785 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0919 18:39:25.356474 15785 cli_runner.go:164] Run: docker network inspect addons-807343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 18:39:25.371020 15785 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0919 18:39:25.374105 15785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0919 18:39:25.383284 15785 kubeadm.go:883] updating cluster {Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0919 18:39:25.383394 15785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 18:39:25.383451 15785 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0919 18:39:25.400716 15785 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0919 18:39:25.400735 15785 docker.go:615] Images already preloaded, skipping extraction
I0919 18:39:25.400784 15785 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0919 18:39:25.417050 15785 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0919 18:39:25.417072 15785 cache_images.go:84] Images are preloaded, skipping loading
I0919 18:39:25.417081 15785 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0919 18:39:25.417174 15785 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-807343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0919 18:39:25.417231 15785 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0919 18:39:25.459623 15785 cni.go:84] Creating CNI manager for ""
I0919 18:39:25.459648 15785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0919 18:39:25.459659 15785 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0919 18:39:25.459681 15785 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-807343 NodeName:addons-807343 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0919 18:39:25.459865 15785 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-807343"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0919 18:39:25.459927 15785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0919 18:39:25.467537 15785 binaries.go:44] Found k8s binaries, skipping transfer
I0919 18:39:25.467595 15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0919 18:39:25.474698 15785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0919 18:39:25.489321 15785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0919 18:39:25.503568 15785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0919 18:39:25.517665 15785 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0919 18:39:25.520416 15785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0919 18:39:25.528989 15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 18:39:25.610615 15785 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0919 18:39:25.621892 15785 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343 for IP: 192.168.49.2
I0919 18:39:25.621907 15785 certs.go:194] generating shared ca certs ...
I0919 18:39:25.621920 15785 certs.go:226] acquiring lock for ca certs: {Name:mk9b3af41122a34a592ac6eeed2c52def55bc0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:25.622030 15785 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key
I0919 18:39:25.811749 15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt ...
I0919 18:39:25.811779 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt: {Name:mk88a94bc694ddec2dfbbbabbcd781f123ddd9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:25.811946 15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key ...
I0919 18:39:25.811958 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key: {Name:mkc66a8180eb661e285aadfb26501f8024a68350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:25.812040 15785 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key
I0919 18:39:25.936604 15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.crt ...
I0919 18:39:25.936633 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.crt: {Name:mk59da6c00fef0ea3e57ded18a5a446ce8386b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:25.936794 15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key ...
I0919 18:39:25.936805 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key: {Name:mk97706d2bef6bf588fc277ec34770368952dd51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:25.936882 15785 certs.go:256] generating profile certs ...
I0919 18:39:25.936937 15785 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.key
I0919 18:39:25.936952 15785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt with IP's: []
I0919 18:39:26.130481 15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt ...
I0919 18:39:26.130509 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: {Name:mk2af64e4bfa59ecca1ebc34fd4b54f302b8c9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:26.130673 15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.key ...
I0919 18:39:26.130685 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.key: {Name:mk7cc9b760b355caaa9de5b438ced6df5b29b8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:26.130758 15785 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757
I0919 18:39:26.130779 15785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0919 18:39:26.291801 15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757 ...
I0919 18:39:26.291831 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757: {Name:mkda54c7662691b7a5519485a4d5ca155d3460c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:26.291988 15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757 ...
I0919 18:39:26.292002 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757: {Name:mk26c5a88a08c0ce5c993493b938c9d87c643a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:26.292076 15785 certs.go:381] copying /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757 -> /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt
I0919 18:39:26.292155 15785 certs.go:385] copying /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757 -> /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key
I0919 18:39:26.292209 15785 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key
I0919 18:39:26.292235 15785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt with IP's: []
I0919 18:39:26.349251 15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt ...
I0919 18:39:26.349291 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt: {Name:mke88b5dc76835a1e2d726f450c48f436b0d7d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:26.349469 15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key ...
I0919 18:39:26.349482 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key: {Name:mk091ff61abd42dff135c6e85dbd56e53e007fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:26.349673 15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca-key.pem (1675 bytes)
I0919 18:39:26.349706 15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem (1078 bytes)
I0919 18:39:26.349728 15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem (1123 bytes)
I0919 18:39:26.349752 15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/key.pem (1675 bytes)
I0919 18:39:26.350300 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0919 18:39:26.370865 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0919 18:39:26.390309 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0919 18:39:26.409440 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0919 18:39:26.428290 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0919 18:39:26.447263 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0919 18:39:26.466154 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0919 18:39:26.485240 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0919 18:39:26.504162 15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0919 18:39:26.523192 15785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0919 18:39:26.537647 15785 ssh_runner.go:195] Run: openssl version
I0919 18:39:26.542410 15785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0919 18:39:26.550042 15785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0919 18:39:26.552907 15785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
I0919 18:39:26.552955 15785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0919 18:39:26.558581 15785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0919 18:39:26.566026 15785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0919 18:39:26.568809 15785 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0919 18:39:26.568857 15785 kubeadm.go:392] StartCluster: {Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0919 18:39:26.568948 15785 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0919 18:39:26.584613 15785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0919 18:39:26.591832 15785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0919 18:39:26.599095 15785 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0919 18:39:26.599141 15785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0919 18:39:26.606021 15785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0919 18:39:26.606037 15785 kubeadm.go:157] found existing configuration files:
I0919 18:39:26.606064 15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0919 18:39:26.613144 15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0919 18:39:26.613187 15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0919 18:39:26.619993 15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0919 18:39:26.626610 15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0919 18:39:26.626649 15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0919 18:39:26.633254 15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0919 18:39:26.640138 15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0919 18:39:26.640171 15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0919 18:39:26.646789 15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0919 18:39:26.653622 15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0919 18:39:26.653655 15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0919 18:39:26.660309 15785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0919 18:39:26.693756 15785 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0919 18:39:26.693823 15785 kubeadm.go:310] [preflight] Running pre-flight checks
I0919 18:39:26.711253 15785 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0919 18:39:26.711532 15785 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
I0919 18:39:26.711593 15785 kubeadm.go:310] OS: Linux
I0919 18:39:26.711657 15785 kubeadm.go:310] CGROUPS_CPU: enabled
I0919 18:39:26.711730 15785 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0919 18:39:26.711800 15785 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0919 18:39:26.711867 15785 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0919 18:39:26.711937 15785 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0919 18:39:26.712005 15785 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0919 18:39:26.712065 15785 kubeadm.go:310] CGROUPS_PIDS: enabled
I0919 18:39:26.712115 15785 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0919 18:39:26.712169 15785 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0919 18:39:26.758966 15785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0919 18:39:26.759126 15785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0919 18:39:26.759237 15785 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0919 18:39:26.768259 15785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0919 18:39:26.770830 15785 out.go:235] - Generating certificates and keys ...
I0919 18:39:26.770936 15785 kubeadm.go:310] [certs] Using existing ca certificate authority
I0919 18:39:26.771037 15785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0919 18:39:26.962502 15785 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0919 18:39:27.179993 15785 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0919 18:39:27.270096 15785 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0919 18:39:27.429393 15785 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0919 18:39:27.634812 15785 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0919 18:39:27.634924 15785 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-807343 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0919 18:39:27.877591 15785 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0919 18:39:27.877710 15785 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-807343 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0919 18:39:27.927256 15785 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0919 18:39:28.094484 15785 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0919 18:39:28.312183 15785 kubeadm.go:310] [certs] Generating "sa" key and public key
I0919 18:39:28.312289 15785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0919 18:39:28.446828 15785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0919 18:39:28.520358 15785 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0919 18:39:28.786269 15785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0919 18:39:28.840547 15785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0919 18:39:28.938641 15785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0919 18:39:28.939165 15785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0919 18:39:28.941424 15785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0919 18:39:28.944270 15785 out.go:235] - Booting up control plane ...
I0919 18:39:28.944397 15785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0919 18:39:28.944486 15785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0919 18:39:28.944561 15785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0919 18:39:28.952798 15785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0919 18:39:28.957522 15785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0919 18:39:28.957590 15785 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0919 18:39:29.039800 15785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0919 18:39:29.039916 15785 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0919 18:39:29.541218 15785 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.456511ms
I0919 18:39:29.541296 15785 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0919 18:39:34.042383 15785 kubeadm.go:310] [api-check] The API server is healthy after 4.501228564s
I0919 18:39:34.053763 15785 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0919 18:39:34.062315 15785 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0919 18:39:34.076171 15785 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0919 18:39:34.076333 15785 kubeadm.go:310] [mark-control-plane] Marking the node addons-807343 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0919 18:39:34.082777 15785 kubeadm.go:310] [bootstrap-token] Using token: jppcbr.ipmzwmexwii5boyd
I0919 18:39:34.083888 15785 out.go:235] - Configuring RBAC rules ...
I0919 18:39:34.084024 15785 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0919 18:39:34.086696 15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0919 18:39:34.091730 15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0919 18:39:34.093714 15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0919 18:39:34.095697 15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0919 18:39:34.097532 15785 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0919 18:39:34.446856 15785 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0919 18:39:34.887992 15785 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0919 18:39:35.447727 15785 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0919 18:39:35.449363 15785 kubeadm.go:310]
I0919 18:39:35.449436 15785 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0919 18:39:35.449450 15785 kubeadm.go:310]
I0919 18:39:35.449529 15785 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0919 18:39:35.449537 15785 kubeadm.go:310]
I0919 18:39:35.449558 15785 kubeadm.go:310] mkdir -p $HOME/.kube
I0919 18:39:35.449625 15785 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0919 18:39:35.449669 15785 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0919 18:39:35.449675 15785 kubeadm.go:310]
I0919 18:39:35.449727 15785 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0919 18:39:35.449734 15785 kubeadm.go:310]
I0919 18:39:35.449773 15785 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0919 18:39:35.449779 15785 kubeadm.go:310]
I0919 18:39:35.449822 15785 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0919 18:39:35.449890 15785 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0919 18:39:35.449952 15785 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0919 18:39:35.449958 15785 kubeadm.go:310]
I0919 18:39:35.450087 15785 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0919 18:39:35.450195 15785 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0919 18:39:35.450217 15785 kubeadm.go:310]
I0919 18:39:35.450329 15785 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jppcbr.ipmzwmexwii5boyd \
I0919 18:39:35.450466 15785 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:e0fcc53032e0b18914406382acdde1a617457fe4835684fffa9f8c03161aa32e \
I0919 18:39:35.450503 15785 kubeadm.go:310] --control-plane
I0919 18:39:35.450519 15785 kubeadm.go:310]
I0919 18:39:35.450622 15785 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0919 18:39:35.450632 15785 kubeadm.go:310]
I0919 18:39:35.450742 15785 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jppcbr.ipmzwmexwii5boyd \
I0919 18:39:35.450881 15785 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:e0fcc53032e0b18914406382acdde1a617457fe4835684fffa9f8c03161aa32e
I0919 18:39:35.452782 15785 kubeadm.go:310] W0919 18:39:26.691424 1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0919 18:39:35.453072 15785 kubeadm.go:310] W0919 18:39:26.691966 1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0919 18:39:35.453270 15785 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
I0919 18:39:35.453363 15785 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0919 18:39:35.453386 15785 cni.go:84] Creating CNI manager for ""
I0919 18:39:35.453404 15785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0919 18:39:35.454864 15785 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0919 18:39:35.456073 15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0919 18:39:35.463941 15785 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0919 18:39:35.479046 15785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0919 18:39:35.479126 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:35.479178 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-807343 minikube.k8s.io/updated_at=2024_09_19T18_39_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-807343 minikube.k8s.io/primary=true
I0919 18:39:35.569970 15785 ops.go:34] apiserver oom_adj: -16
I0919 18:39:35.582355 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:36.083176 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:36.582986 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:37.083332 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:37.583026 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:38.083116 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:38.583366 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:39.082554 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:39.583201 15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 18:39:39.645539 15785 kubeadm.go:1113] duration metric: took 4.166483325s to wait for elevateKubeSystemPrivileges
I0919 18:39:39.645571 15785 kubeadm.go:394] duration metric: took 13.076716929s to StartCluster
I0919 18:39:39.645590 15785 settings.go:142] acquiring lock: {Name:mk64b5a5d79680fb0b250d268808142029c49502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:39.645687 15785 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19664-7708/kubeconfig
I0919 18:39:39.646010 15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/kubeconfig: {Name:mk4b292ae80d4376ae5eb287b2c4e3e0d9b1ffde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 18:39:39.646175 15785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0919 18:39:39.646184 15785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0919 18:39:39.646254 15785 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0919 18:39:39.646386 15785 addons.go:69] Setting yakd=true in profile "addons-807343"
I0919 18:39:39.646401 15785 addons.go:69] Setting gcp-auth=true in profile "addons-807343"
I0919 18:39:39.646410 15785 addons.go:234] Setting addon yakd=true in "addons-807343"
I0919 18:39:39.646419 15785 addons.go:69] Setting inspektor-gadget=true in profile "addons-807343"
I0919 18:39:39.646430 15785 mustload.go:65] Loading cluster: addons-807343
I0919 18:39:39.646437 15785 addons.go:234] Setting addon inspektor-gadget=true in "addons-807343"
I0919 18:39:39.646437 15785 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-807343"
I0919 18:39:39.646437 15785 addons.go:69] Setting cloud-spanner=true in profile "addons-807343"
I0919 18:39:39.646455 15785 addons.go:69] Setting volcano=true in profile "addons-807343"
I0919 18:39:39.646465 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646412 15785 addons.go:69] Setting ingress-dns=true in profile "addons-807343"
I0919 18:39:39.646477 15785 addons.go:69] Setting storage-provisioner=true in profile "addons-807343"
I0919 18:39:39.646479 15785 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-807343"
I0919 18:39:39.646492 15785 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-807343"
I0919 18:39:39.646504 15785 addons.go:69] Setting helm-tiller=true in profile "addons-807343"
I0919 18:39:39.646517 15785 addons.go:234] Setting addon helm-tiller=true in "addons-807343"
I0919 18:39:39.646537 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646401 15785 addons.go:69] Setting registry=true in profile "addons-807343"
I0919 18:39:39.646576 15785 addons.go:234] Setting addon registry=true in "addons-807343"
I0919 18:39:39.646602 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646602 15785 config.go:182] Loaded profile config "addons-807343": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:39:39.646465 15785 addons.go:69] Setting metrics-server=true in profile "addons-807343"
I0919 18:39:39.646696 15785 addons.go:234] Setting addon metrics-server=true in "addons-807343"
I0919 18:39:39.646718 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646539 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646415 15785 addons.go:69] Setting default-storageclass=true in profile "addons-807343"
I0919 18:39:39.646846 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646493 15785 addons.go:234] Setting addon storage-provisioner=true in "addons-807343"
I0919 18:39:39.646945 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646992 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.647021 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.647088 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.647163 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.647229 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646844 15785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-807343"
I0919 18:39:39.647438 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.647669 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646469 15785 addons.go:234] Setting addon volcano=true in "addons-807343"
I0919 18:39:39.648086 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.648663 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646388 15785 addons.go:69] Setting ingress=true in profile "addons-807343"
I0919 18:39:39.648898 15785 addons.go:234] Setting addon ingress=true in "addons-807343"
I0919 18:39:39.646468 15785 addons.go:234] Setting addon cloud-spanner=true in "addons-807343"
I0919 18:39:39.649005 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.646445 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.649579 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646495 15785 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-807343"
I0919 18:39:39.649801 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.649908 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646388 15785 config.go:182] Loaded profile config "addons-807343": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:39:39.646493 15785 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-807343"
I0919 18:39:39.649997 15785 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-807343"
I0919 18:39:39.646454 15785 addons.go:69] Setting volumesnapshots=true in profile "addons-807343"
I0919 18:39:39.650137 15785 addons.go:234] Setting addon volumesnapshots=true in "addons-807343"
I0919 18:39:39.650163 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.648960 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.650275 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.646483 15785 addons.go:234] Setting addon ingress-dns=true in "addons-807343"
I0919 18:39:39.650558 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.648969 15785 out.go:177] * Verifying Kubernetes components...
I0919 18:39:39.651972 15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 18:39:39.672671 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.672787 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.672671 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.673219 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.685075 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.686451 15785 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0919 18:39:39.687978 15785 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0919 18:39:39.687999 15785 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0919 18:39:39.688172 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.690225 15785 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0919 18:39:39.691209 15785 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0919 18:39:39.692320 15785 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0919 18:39:39.693277 15785 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0919 18:39:39.695524 15785 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0919 18:39:39.698952 15785 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0919 18:39:39.699973 15785 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0919 18:39:39.701052 15785 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0919 18:39:39.701966 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0919 18:39:39.701985 15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0919 18:39:39.702037 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.716700 15785 out.go:177] - Using image docker.io/registry:2.8.3
I0919 18:39:39.717888 15785 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0919 18:39:39.718925 15785 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0919 18:39:39.718945 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0919 18:39:39.718996 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.719101 15785 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0919 18:39:39.720139 15785 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0919 18:39:39.720157 15785 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0919 18:39:39.720193 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.726091 15785 addons.go:234] Setting addon default-storageclass=true in "addons-807343"
I0919 18:39:39.726127 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.726546 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.727468 15785 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0919 18:39:39.728838 15785 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0919 18:39:39.730129 15785 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0919 18:39:39.732571 15785 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0919 18:39:39.732612 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0919 18:39:39.732677 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.734302 15785 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0919 18:39:39.734369 15785 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0919 18:39:39.735521 15785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0919 18:39:39.735541 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0919 18:39:39.735586 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.735849 15785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0919 18:39:39.735863 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0919 18:39:39.735907 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.742065 15785 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0919 18:39:39.743716 15785 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0919 18:39:39.743957 15785 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0919 18:39:39.744757 15785 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0919 18:39:39.744774 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0919 18:39:39.744823 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.754042 15785 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0919 18:39:39.755361 15785 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0919 18:39:39.755380 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0919 18:39:39.755429 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.757120 15785 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0919 18:39:39.758249 15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0919 18:39:39.758264 15785 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0919 18:39:39.758310 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.762910 15785 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0919 18:39:39.762934 15785 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0919 18:39:39.764070 15785 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0919 18:39:39.764088 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0919 18:39:39.764136 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.764324 15785 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0919 18:39:39.764337 15785 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0919 18:39:39.764386 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.765848 15785 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0919 18:39:39.767312 15785 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0919 18:39:39.767328 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0919 18:39:39.767376 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.772660 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.774038 15785 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-807343"
I0919 18:39:39.774083 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:39.774554 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:39.791138 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.792630 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.793037 15785 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0919 18:39:39.793058 15785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0919 18:39:39.793109 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.794316 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.795317 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.795734 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.812437 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.821141 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.821937 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.824156 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.827809 15785 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0919 18:39:39.830379 15785 out.go:177] - Using image docker.io/busybox:stable
I0919 18:39:39.831473 15785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0919 18:39:39.831494 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0919 18:39:39.831543 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:39.834474 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.835857 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.837358 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.838132 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.849755 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:39.871087 15785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0919 18:39:39.871195 15785 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0919 18:39:40.186871 15785 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0919 18:39:40.186961 15785 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0919 18:39:40.368885 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0919 18:39:40.370061 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0919 18:39:40.379859 15785 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0919 18:39:40.379928 15785 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0919 18:39:40.385951 15785 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0919 18:39:40.386023 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0919 18:39:40.392100 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0919 18:39:40.392123 15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0919 18:39:40.482380 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0919 18:39:40.486087 15785 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0919 18:39:40.486111 15785 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0919 18:39:40.568800 15785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0919 18:39:40.568886 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0919 18:39:40.570238 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0919 18:39:40.570726 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0919 18:39:40.576395 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0919 18:39:40.669560 15785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0919 18:39:40.669589 15785 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0919 18:39:40.670189 15785 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0919 18:39:40.670214 15785 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0919 18:39:40.673207 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0919 18:39:40.673481 15785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0919 18:39:40.673538 15785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0919 18:39:40.685737 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0919 18:39:40.687656 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0919 18:39:40.779723 15785 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0919 18:39:40.779814 15785 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0919 18:39:40.782608 15785 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0919 18:39:40.782692 15785 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0919 18:39:40.785974 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0919 18:39:40.786039 15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0919 18:39:40.874803 15785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0919 18:39:40.874887 15785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0919 18:39:40.969672 15785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0919 18:39:40.969709 15785 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0919 18:39:40.981368 15785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0919 18:39:40.981461 15785 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0919 18:39:40.985252 15785 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0919 18:39:40.985323 15785 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0919 18:39:41.187670 15785 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0919 18:39:41.187768 15785 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0919 18:39:41.267896 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0919 18:39:41.267931 15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0919 18:39:41.280914 15785 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0919 18:39:41.280940 15785 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0919 18:39:41.483272 15785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0919 18:39:41.483315 15785 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0919 18:39:41.571400 15785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0919 18:39:41.571429 15785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0919 18:39:41.688891 15785 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0919 18:39:41.688956 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0919 18:39:41.770515 15785 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0919 18:39:41.770592 15785 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0919 18:39:41.870144 15785 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.998906916s)
I0919 18:39:41.871107 15785 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.999989161s)
I0919 18:39:41.871262 15785 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0919 18:39:41.872202 15785 node_ready.go:35] waiting up to 6m0s for node "addons-807343" to be "Ready" ...
I0919 18:39:41.873755 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0919 18:39:41.878692 15785 node_ready.go:49] node "addons-807343" has status "Ready":"True"
I0919 18:39:41.878750 15785 node_ready.go:38] duration metric: took 6.488114ms for node "addons-807343" to be "Ready" ...
I0919 18:39:41.878841 15785 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0919 18:39:41.889700 15785 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace to be "Ready" ...
I0919 18:39:41.972397 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0919 18:39:41.972479 15785 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0919 18:39:42.069920 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0919 18:39:42.075240 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0919 18:39:42.075317 15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0919 18:39:42.173829 15785 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0919 18:39:42.173916 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0919 18:39:42.180217 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0919 18:39:42.376149 15785 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-807343" context rescaled to 1 replicas
I0919 18:39:42.390234 15785 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0919 18:39:42.390257 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0919 18:39:42.569959 15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0919 18:39:42.569990 15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0919 18:39:42.682561 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0919 18:39:43.188993 15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0919 18:39:43.189022 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0919 18:39:43.291834 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0919 18:39:43.770772 15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0919 18:39:43.770875 15785 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0919 18:39:43.980025 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:44.374980 15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0919 18:39:44.375006 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0919 18:39:44.580609 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.211613055s)
I0919 18:39:45.068915 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.698766039s)
I0919 18:39:45.069261 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.586844604s)
I0919 18:39:45.169863 15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0919 18:39:45.169952 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0919 18:39:45.879874 15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0919 18:39:45.879919 15785 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0919 18:39:45.988792 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:46.070160 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0919 18:39:46.775488 15785 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0919 18:39:46.775662 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:46.799484 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:47.578618 15785 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0919 18:39:47.877471 15785 addons.go:234] Setting addon gcp-auth=true in "addons-807343"
I0919 18:39:47.877534 15785 host.go:66] Checking if "addons-807343" exists ...
I0919 18:39:47.878040 15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
I0919 18:39:47.901007 15785 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0919 18:39:47.901048 15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
I0919 18:39:47.917809 15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
I0919 18:39:48.472584 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:50.971717 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:51.877162 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.306371145s)
I0919 18:39:51.877251 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.306949991s)
I0919 18:39:51.877292 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.300857244s)
I0919 18:39:51.877639 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.204365694s)
I0919 18:39:51.877658 15785 addons.go:475] Verifying addon ingress=true in "addons-807343"
I0919 18:39:51.877829 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.192014171s)
I0919 18:39:51.877847 15785 addons.go:475] Verifying addon registry=true in "addons-807343"
I0919 18:39:51.878290 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.004474997s)
I0919 18:39:51.878446 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.808483441s)
I0919 18:39:51.878636 15785 addons.go:475] Verifying addon metrics-server=true in "addons-807343"
I0919 18:39:51.878506 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.698206742s)
I0919 18:39:51.878602 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.195990945s)
I0919 18:39:51.878734 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.190302446s)
I0919 18:39:51.878697 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.586832565s)
W0919 18:39:51.878768 15785 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0919 18:39:51.878790 15785 retry.go:31] will retry after 169.31069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0919 18:39:51.879703 15785 out.go:177] * Verifying registry addon...
I0919 18:39:51.879704 15785 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-807343 service yakd-dashboard -n yakd-dashboard
I0919 18:39:51.879880 15785 out.go:177] * Verifying ingress addon...
I0919 18:39:51.885615 15785 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0919 18:39:51.885616 15785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0919 18:39:51.889715 15785 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0919 18:39:51.889736 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:52.048535 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0919 18:39:52.068686 15785 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0919 18:39:52.068755 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:52.391441 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:52.391634 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:52.890170 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:52.891216 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:53.190665 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.120439643s)
I0919 18:39:53.190701 15785 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.28966662s)
I0919 18:39:53.190706 15785 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-807343"
I0919 18:39:53.192279 15785 out.go:177] * Verifying csi-hostpath-driver addon...
I0919 18:39:53.192389 15785 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0919 18:39:53.194636 15785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 18:39:53.196076 15785 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0919 18:39:53.197177 15785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0919 18:39:53.197198 15785 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0919 18:39:53.199456 15785 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:39:53.199471 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:53.290404 15785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0919 18:39:53.290428 15785 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0919 18:39:53.379776 15785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0919 18:39:53.379809 15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0919 18:39:53.469691 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:53.470717 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:53.472271 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:53.491578 15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0919 18:39:53.769699 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:53.891606 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:53.891668 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:54.199301 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:54.390659 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:54.391264 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:54.580558 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.531977835s)
I0919 18:39:54.699628 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:54.889533 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:54.890682 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:54.968955 15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.477330303s)
I0919 18:39:54.970347 15785 addons.go:475] Verifying addon gcp-auth=true in "addons-807343"
I0919 18:39:54.971906 15785 out.go:177] * Verifying gcp-auth addon...
I0919 18:39:54.973873 15785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0919 18:39:54.989005 15785 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0919 18:39:55.199575 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:55.390092 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:55.390546 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:55.699567 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:55.890692 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:55.891284 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:55.894135 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:56.199324 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:56.389725 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:56.389992 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:56.700164 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:56.889558 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:56.889813 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:57.199204 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:57.389401 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:57.389801 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:57.699512 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:57.889545 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:57.890534 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:57.894760 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:39:58.273726 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:58.389680 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:58.389839 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:58.699193 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:58.889408 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:58.889653 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:59.199178 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:59.389235 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:59.390135 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:39:59.699641 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:39:59.889652 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:39:59.890086 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:00.199600 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:00.389857 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:00.390211 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:00.394214 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:00.698928 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:00.889710 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:00.890101 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:01.200176 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:01.389298 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:01.389606 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:01.699016 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:01.889132 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:01.889473 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:02.199184 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:02.389375 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:02.389771 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:02.394514 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:02.699125 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:02.890049 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:02.890525 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:03.199626 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:03.388747 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:03.388967 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:03.699525 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:03.889759 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:03.890315 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:04.199815 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:04.389180 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:04.389658 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:04.699330 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:04.889963 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:04.890286 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:04.895112 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:05.199392 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:05.389968 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:05.390368 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:05.699223 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:05.890393 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:05.890757 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:06.200106 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:06.391139 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:06.391845 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:06.698525 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:06.890016 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:06.890230 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:07.198979 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:07.389511 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:07.389852 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:07.393612 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:07.699318 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:07.889173 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:07.890152 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:08.199616 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:08.389123 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:08.389534 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:08.699493 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:08.888933 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:08.889396 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:09.198817 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:09.391519 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:09.391963 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:09.395307 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:09.699691 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:09.888775 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:09.889194 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:10.199293 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:10.389647 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:10.390049 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:10.698674 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:10.889437 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:10.889647 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:11.199499 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:11.389594 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:11.390432 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:11.699325 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:11.889523 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:11.889788 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:11.894236 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:12.199321 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:12.389440 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:12.389720 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:12.699209 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:12.890270 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:12.890421 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:13.200175 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:13.390275 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:13.390863 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:13.699657 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:13.890252 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:13.890543 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:13.894747 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:14.199118 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:14.389959 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:14.390430 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:14.699756 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:14.889721 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:14.889835 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:15.198522 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:15.388932 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:15.389359 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:15.699594 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:15.889793 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:15.890307 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:16.199345 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:16.389850 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0919 18:40:16.390361 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:16.393417 15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
I0919 18:40:16.699438 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:16.889814 15785 kapi.go:107] duration metric: took 25.004195756s to wait for kubernetes.io/minikube-addons=registry ...
I0919 18:40:16.890295 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:17.199183 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:17.390022 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:17.395295 15785 pod_ready.go:93] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:17.395368 15785 pod_ready.go:82] duration metric: took 35.505606197s for pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.395386 15785 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.396928 15785 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-j7z28" not found
I0919 18:40:17.396946 15785 pod_ready.go:82] duration metric: took 1.554154ms for pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace to be "Ready" ...
E0919 18:40:17.396955 15785 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-j7z28" not found
I0919 18:40:17.396961 15785 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.400896 15785 pod_ready.go:93] pod "etcd-addons-807343" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:17.400914 15785 pod_ready.go:82] duration metric: took 3.945864ms for pod "etcd-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.400924 15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.404814 15785 pod_ready.go:93] pod "kube-apiserver-addons-807343" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:17.404835 15785 pod_ready.go:82] duration metric: took 3.902185ms for pod "kube-apiserver-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.404846 15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.408587 15785 pod_ready.go:93] pod "kube-controller-manager-addons-807343" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:17.408603 15785 pod_ready.go:82] duration metric: took 3.750531ms for pod "kube-controller-manager-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.408612 15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ddktm" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.593370 15785 pod_ready.go:93] pod "kube-proxy-ddktm" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:17.593392 15785 pod_ready.go:82] duration metric: took 184.772891ms for pod "kube-proxy-ddktm" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.593403 15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.699138 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:17.978072 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:17.992993 15785 pod_ready.go:93] pod "kube-scheduler-addons-807343" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:17.993017 15785 pod_ready.go:82] duration metric: took 399.606715ms for pod "kube-scheduler-addons-807343" in "kube-system" namespace to be "Ready" ...
I0919 18:40:17.993031 15785 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4rj76" in "kube-system" namespace to be "Ready" ...
I0919 18:40:18.199633 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:18.389659 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:18.392965 15785 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4rj76" in "kube-system" namespace has status "Ready":"True"
I0919 18:40:18.392988 15785 pod_ready.go:82] duration metric: took 399.948916ms for pod "nvidia-device-plugin-daemonset-4rj76" in "kube-system" namespace to be "Ready" ...
I0919 18:40:18.392998 15785 pod_ready.go:39] duration metric: took 36.514120678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0919 18:40:18.393023 15785 api_server.go:52] waiting for apiserver process to appear ...
I0919 18:40:18.393084 15785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0919 18:40:18.410793 15785 api_server.go:72] duration metric: took 38.764584516s to wait for apiserver process to appear ...
I0919 18:40:18.410817 15785 api_server.go:88] waiting for apiserver healthz status ...
I0919 18:40:18.410839 15785 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0919 18:40:18.415893 15785 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0919 18:40:18.416856 15785 api_server.go:141] control plane version: v1.31.1
I0919 18:40:18.416881 15785 api_server.go:131] duration metric: took 6.056323ms to wait for apiserver health ...
I0919 18:40:18.416890 15785 system_pods.go:43] waiting for kube-system pods to appear ...
I0919 18:40:18.599874 15785 system_pods.go:59] 18 kube-system pods found
I0919 18:40:18.599909 15785 system_pods.go:61] "coredns-7c65d6cfc9-cfl84" [eee626ef-868e-4ead-b5e6-9517454e5ff9] Running
I0919 18:40:18.599921 15785 system_pods.go:61] "csi-hostpath-attacher-0" [7b2441ca-4042-46e2-807a-db381962ac05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0919 18:40:18.599931 15785 system_pods.go:61] "csi-hostpath-resizer-0" [b8dcc11c-f567-48e9-ab17-75e5b0475393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0919 18:40:18.599942 15785 system_pods.go:61] "csi-hostpathplugin-pzn4j" [3e4889e0-e027-4eca-a4da-302b8811e298] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0919 18:40:18.599949 15785 system_pods.go:61] "etcd-addons-807343" [408b91f6-57e0-4654-88c5-a8d6b550bac6] Running
I0919 18:40:18.599956 15785 system_pods.go:61] "kube-apiserver-addons-807343" [d8bc5f24-a83f-493a-8d39-601bb010a9e5] Running
I0919 18:40:18.599961 15785 system_pods.go:61] "kube-controller-manager-addons-807343" [3966284d-f6c5-45a2-8544-7e632f3ab601] Running
I0919 18:40:18.600003 15785 system_pods.go:61] "kube-ingress-dns-minikube" [3d06f080-c6aa-4078-971c-fd8426586f6e] Running
I0919 18:40:18.600012 15785 system_pods.go:61] "kube-proxy-ddktm" [f6ad1770-b609-4aff-8863-8912236980a1] Running
I0919 18:40:18.600020 15785 system_pods.go:61] "kube-scheduler-addons-807343" [1bf6d1d5-3895-4bc0-a679-1c913857701c] Running
I0919 18:40:18.600025 15785 system_pods.go:61] "metrics-server-84c5f94fbc-d74dx" [d90ed638-b34d-4a70-a846-898f37d3a262] Running
I0919 18:40:18.600033 15785 system_pods.go:61] "nvidia-device-plugin-daemonset-4rj76" [0c3f2ba6-3e70-4d40-844b-605e747b7435] Running
I0919 18:40:18.600042 15785 system_pods.go:61] "registry-66c9cd494c-bxkct" [5daab8c5-d486-4f2e-a165-b7129bb49ef1] Running
I0919 18:40:18.600053 15785 system_pods.go:61] "registry-proxy-bbpkk" [073b4ea3-119e-40f8-9331-51fd7dfdf5bf] Running
I0919 18:40:18.600066 15785 system_pods.go:61] "snapshot-controller-56fcc65765-6ptq4" [b4bdc5cf-660e-4290-820f-ebce887001c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0919 18:40:18.600082 15785 system_pods.go:61] "snapshot-controller-56fcc65765-b7vgm" [ff423e87-1fec-4d13-8aef-42c22620df00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0919 18:40:18.600091 15785 system_pods.go:61] "storage-provisioner" [b72035c0-b232-4cda-9f88-42bf47f8ddc3] Running
I0919 18:40:18.600101 15785 system_pods.go:61] "tiller-deploy-b48cc5f79-vmsvx" [3388a43f-3bd2-4f3a-8975-ecd10db08a16] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0919 18:40:18.600108 15785 system_pods.go:74] duration metric: took 183.213516ms to wait for pod list to return data ...
I0919 18:40:18.600116 15785 default_sa.go:34] waiting for default service account to be created ...
I0919 18:40:18.699903 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:18.793571 15785 default_sa.go:45] found service account: "default"
I0919 18:40:18.793601 15785 default_sa.go:55] duration metric: took 193.477641ms for default service account to be created ...
I0919 18:40:18.793613 15785 system_pods.go:116] waiting for k8s-apps to be running ...
I0919 18:40:18.890364 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:19.083135 15785 system_pods.go:86] 18 kube-system pods found
I0919 18:40:19.083163 15785 system_pods.go:89] "coredns-7c65d6cfc9-cfl84" [eee626ef-868e-4ead-b5e6-9517454e5ff9] Running
I0919 18:40:19.083172 15785 system_pods.go:89] "csi-hostpath-attacher-0" [7b2441ca-4042-46e2-807a-db381962ac05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0919 18:40:19.083178 15785 system_pods.go:89] "csi-hostpath-resizer-0" [b8dcc11c-f567-48e9-ab17-75e5b0475393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0919 18:40:19.083191 15785 system_pods.go:89] "csi-hostpathplugin-pzn4j" [3e4889e0-e027-4eca-a4da-302b8811e298] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0919 18:40:19.083198 15785 system_pods.go:89] "etcd-addons-807343" [408b91f6-57e0-4654-88c5-a8d6b550bac6] Running
I0919 18:40:19.083204 15785 system_pods.go:89] "kube-apiserver-addons-807343" [d8bc5f24-a83f-493a-8d39-601bb010a9e5] Running
I0919 18:40:19.083212 15785 system_pods.go:89] "kube-controller-manager-addons-807343" [3966284d-f6c5-45a2-8544-7e632f3ab601] Running
I0919 18:40:19.083221 15785 system_pods.go:89] "kube-ingress-dns-minikube" [3d06f080-c6aa-4078-971c-fd8426586f6e] Running
I0919 18:40:19.083229 15785 system_pods.go:89] "kube-proxy-ddktm" [f6ad1770-b609-4aff-8863-8912236980a1] Running
I0919 18:40:19.083234 15785 system_pods.go:89] "kube-scheduler-addons-807343" [1bf6d1d5-3895-4bc0-a679-1c913857701c] Running
I0919 18:40:19.083240 15785 system_pods.go:89] "metrics-server-84c5f94fbc-d74dx" [d90ed638-b34d-4a70-a846-898f37d3a262] Running
I0919 18:40:19.083245 15785 system_pods.go:89] "nvidia-device-plugin-daemonset-4rj76" [0c3f2ba6-3e70-4d40-844b-605e747b7435] Running
I0919 18:40:19.083251 15785 system_pods.go:89] "registry-66c9cd494c-bxkct" [5daab8c5-d486-4f2e-a165-b7129bb49ef1] Running
I0919 18:40:19.083254 15785 system_pods.go:89] "registry-proxy-bbpkk" [073b4ea3-119e-40f8-9331-51fd7dfdf5bf] Running
I0919 18:40:19.083264 15785 system_pods.go:89] "snapshot-controller-56fcc65765-6ptq4" [b4bdc5cf-660e-4290-820f-ebce887001c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0919 18:40:19.083273 15785 system_pods.go:89] "snapshot-controller-56fcc65765-b7vgm" [ff423e87-1fec-4d13-8aef-42c22620df00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0919 18:40:19.083280 15785 system_pods.go:89] "storage-provisioner" [b72035c0-b232-4cda-9f88-42bf47f8ddc3] Running
I0919 18:40:19.083286 15785 system_pods.go:89] "tiller-deploy-b48cc5f79-vmsvx" [3388a43f-3bd2-4f3a-8975-ecd10db08a16] Running
I0919 18:40:19.083297 15785 system_pods.go:126] duration metric: took 289.67668ms to wait for k8s-apps to be running ...
I0919 18:40:19.083310 15785 system_svc.go:44] waiting for kubelet service to be running ....
I0919 18:40:19.083363 15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0919 18:40:19.095871 15785 system_svc.go:56] duration metric: took 12.554192ms WaitForService to wait for kubelet
I0919 18:40:19.095894 15785 kubeadm.go:582] duration metric: took 39.449692605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0919 18:40:19.095909 15785 node_conditions.go:102] verifying NodePressure condition ...
I0919 18:40:19.193533 15785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0919 18:40:19.193572 15785 node_conditions.go:123] node cpu capacity is 8
I0919 18:40:19.193584 15785 node_conditions.go:105] duration metric: took 97.669974ms to run NodePressure ...
I0919 18:40:19.193605 15785 start.go:241] waiting for startup goroutines ...
I0919 18:40:19.199329 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:19.389586 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:19.699865 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:19.889664 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:20.199325 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:20.389948 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:20.698535 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:20.888906 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:21.200209 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:21.389965 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:21.699168 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:21.889838 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:22.200065 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:22.389527 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:22.699501 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:22.893065 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:23.199343 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:23.390191 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:23.699260 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:23.890461 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:24.199477 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:24.389136 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:24.699221 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:24.889820 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:25.200138 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:25.389292 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:25.698417 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:25.890518 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:26.200657 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:26.389809 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:26.698799 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:26.889818 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:27.199794 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:27.389534 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:27.699533 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:27.900422 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:28.199254 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:28.390107 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:28.700422 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:28.889998 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:29.199662 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:29.390131 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:29.699239 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:29.890070 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:30.208176 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:30.390438 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:30.699509 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:30.890745 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:31.199683 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:31.390240 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:31.698927 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:31.889552 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:32.200440 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:32.389357 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:32.699606 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:32.890596 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:33.198837 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:33.389932 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:33.698946 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:33.889654 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:34.199377 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:34.390365 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:34.700286 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:34.890695 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:35.199638 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:35.389579 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:35.699572 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:35.913625 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:36.199281 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0919 18:40:36.389703 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:36.699318 15785 kapi.go:107] duration metric: took 43.504683414s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0919 18:40:36.889764 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:37.389801 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:37.889236 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:38.388685 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:38.889545 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:39.389995 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:39.889486 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:40.389215 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:40.889229 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:41.389634 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:41.889403 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:42.389505 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:42.889748 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:43.389834 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:43.890568 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:44.389091 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:44.888982 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:45.389760 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:45.889566 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:46.389473 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:46.889239 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:47.388895 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:47.889771 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:48.389873 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:48.888889 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:49.389565 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:49.889052 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:50.388648 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:50.890546 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:51.389642 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:51.889149 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:52.389532 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:52.889211 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:53.388991 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:53.890270 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:54.389964 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:54.890148 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:55.389721 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:55.889445 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:56.392833 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:56.890802 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:57.390778 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:57.890612 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:58.389837 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:58.889194 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:59.389994 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:40:59.889562 15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0919 18:41:00.389962 15785 kapi.go:107] duration metric: took 1m8.504343143s to wait for app.kubernetes.io/name=ingress-nginx ...
I0919 18:41:17.977430 15785 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0919 18:41:17.977450 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:18.477412 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:18.977559 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:19.478051 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:19.977250 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:20.477177 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:20.976908 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:21.477062 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:21.977056 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:22.477301 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:22.977118 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:23.477311 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:23.977103 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:24.476681 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:24.977005 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:25.476727 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:25.977583 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:26.477316 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:26.977430 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:27.477613 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:27.976511 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:28.477268 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:28.977032 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:29.477075 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:29.977036 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:30.476774 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:30.976753 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:31.477789 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:31.976192 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:32.476857 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:32.976599 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:33.477798 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:33.977761 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:34.476709 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:34.977406 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:35.477622 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:35.977431 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:36.476944 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:36.976545 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:37.477757 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:37.976828 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:38.476937 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:38.976871 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:39.477063 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:39.976573 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:40.477233 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:40.976832 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:41.477099 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:41.976341 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:42.477343 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:42.977175 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:43.477110 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:43.976897 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:44.476720 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:44.977444 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:45.477453 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:45.977362 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:46.476931 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:46.976537 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:47.477625 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:47.976762 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:48.477054 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:48.976831 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:49.476820 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:49.976955 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:50.476482 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:50.977204 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:51.477487 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:51.976985 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:52.476702 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:52.976600 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:53.477907 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:53.976807 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:54.476925 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:54.977194 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:55.476948 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:55.976643 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:56.477680 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:56.977260 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:57.477131 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:57.976828 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:58.477969 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:58.976589 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:59.477555 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:41:59.977385 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:00.476995 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:00.977144 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:01.477140 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:01.976972 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:02.476881 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:02.976631 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:03.478121 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:03.976817 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:04.476895 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:04.976589 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:05.478041 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:05.977540 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:06.477564 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:06.976300 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:07.477407 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:07.977542 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:08.477695 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:08.977438 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:09.477300 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:09.976883 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:10.476638 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:10.977649 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:11.477980 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:11.976497 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:12.477621 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:12.977408 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:13.477611 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:13.977664 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:14.477488 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:14.977149 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:15.477402 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:15.976976 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:16.476408 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:16.977365 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:17.477449 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:17.977882 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:18.476701 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:18.977448 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:19.478880 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:19.976617 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:20.477183 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:20.977195 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:21.477274 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:21.976674 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:22.477745 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:22.976405 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:23.477756 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:23.977333 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:24.477445 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:24.976693 15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0919 18:42:25.478145 15785 kapi.go:107] duration metric: took 2m30.504270195s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0919 18:42:25.479629 15785 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-807343 cluster.
I0919 18:42:25.480807 15785 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0919 18:42:25.482012 15785 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0919 18:42:25.483313 15785 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0919 18:42:25.484387 15785 addons.go:510] duration metric: took 2m45.83813789s for enable addons: enabled=[storage-provisioner ingress-dns storage-provisioner-rancher volcano nvidia-device-plugin helm-tiller metrics-server inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0919 18:42:25.484420 15785 start.go:246] waiting for cluster config update ...
I0919 18:42:25.484439 15785 start.go:255] writing updated cluster config ...
I0919 18:42:25.484670 15785 ssh_runner.go:195] Run: rm -f paused
I0919 18:42:25.530236 15785 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0919 18:42:25.532248 15785 out.go:177] * Done! kubectl is now configured to use "addons-807343" cluster and "default" namespace by default
==> Docker <==
Sep 19 18:51:49 addons-807343 dockerd[1337]: time="2024-09-19T18:51:49.894152971Z" level=warning msg="failed to close stdin: NotFound: task 92d5192af51c5ab20eda8cee5705b369ffea8983b981b69536c34201e655ec22 not found: not found"
Sep 19 18:51:51 addons-807343 dockerd[1337]: time="2024-09-19T18:51:51.693232557Z" level=info msg="ignoring event" container=7fc8608673ad2821f5bc4bb8de4fcdccfda3907b3d13ce6f1efbe2b1732b0c93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:51:52 addons-807343 dockerd[1337]: time="2024-09-19T18:51:52.168199715Z" level=info msg="ignoring event" container=ce976de5bdf099212de44cd6465570c744e498e2e20fa8cb2ec263592333622c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:51:52 addons-807343 dockerd[1337]: time="2024-09-19T18:51:52.292990179Z" level=info msg="ignoring event" container=84719236e80e39e6e55593e3a319bb30d70b78156a98706802f5226ebc645465 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:51:54 addons-807343 dockerd[1337]: time="2024-09-19T18:51:54.760723631Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 19 18:51:54 addons-807343 dockerd[1337]: time="2024-09-19T18:51:54.762509046Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 19 18:51:54 addons-807343 dockerd[1337]: time="2024-09-19T18:51:54.937988436Z" level=info msg="ignoring event" container=7efccd0e37b6586224f49f3c317e5beb17f2ce43acb21314f952e7dd368993f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:51:55 addons-807343 dockerd[1337]: time="2024-09-19T18:51:55.071986891Z" level=info msg="ignoring event" container=f53fc94ea5ace8b46d80aea7fdfbc1aebe19650fdd8951c76b7dc16b3e7e9936 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:51:55 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:51:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d1cbeb90e7864210126b5d957ebd18b0af6352c34e9be2e66c7f12c058b0998f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 19 18:51:57 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:51:57Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
Sep 19 18:51:58 addons-807343 dockerd[1337]: time="2024-09-19T18:51:58.612439525Z" level=info msg="ignoring event" container=4d4f25b9e340db719745c99c8df1d7e1958b6d5c646096a051f1bc16cd0e5d61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:04 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f9c10c881707e2a35f8c287dff2500f782268d81a760b1552f3741a61a52196/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 19 18:52:04 addons-807343 dockerd[1337]: time="2024-09-19T18:52:04.517546509Z" level=info msg="ignoring event" container=3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:04 addons-807343 dockerd[1337]: time="2024-09-19T18:52:04.561411153Z" level=info msg="ignoring event" container=cf40b486b65ecd03cb586caa798d24b72c59b2c1a7a5c4618fb38685e8f8c48f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:04 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:04Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
Sep 19 18:52:05 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:05Z" level=error msg="error getting RW layer size for container ID '3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf': Error response from daemon: No such container: 3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf"
Sep 19 18:52:05 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf'"
Sep 19 18:52:08 addons-807343 dockerd[1337]: time="2024-09-19T18:52:08.491929987Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8
Sep 19 18:52:08 addons-807343 dockerd[1337]: time="2024-09-19T18:52:08.552400017Z" level=info msg="ignoring event" container=10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:08 addons-807343 dockerd[1337]: time="2024-09-19T18:52:08.696776290Z" level=info msg="ignoring event" container=90f138a209b7e2928406d19e9b74cc6193ce03539da373a38e804468226c0636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:14 addons-807343 dockerd[1337]: time="2024-09-19T18:52:14.800501317Z" level=info msg="ignoring event" container=1bcfbde63b203efc42d0134d6ffb9063f6fd592c7103bc01811f5f2c9c642d40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.276584307Z" level=info msg="ignoring event" container=a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.326458758Z" level=info msg="ignoring event" container=d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.416725796Z" level=info msg="ignoring event" container=30843bcbcdcad55806bc12716a5604065f2f013afbfaf79e9b37d926fabbc30e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.492027130Z" level=info msg="ignoring event" container=7c03493c3b3d7c4ad8fb9797a6914e376d92aa6894b41b72baf207d066838ceb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
374e9c462e5db kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 12 seconds ago Running hello-world-app 0 8f9c10c881707 hello-world-app-55bf9c44b4-sxzx2
217553a6fdbd1 nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf 19 seconds ago Running nginx 0 d1cbeb90e7864 nginx
92d5192af51c5 alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f 27 seconds ago Exited helm-test 0 7fc8608673ad2 helm-test
17e78c6f0cb4a gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 568ae086eaca3 gcp-auth-89d5ffd79-qfxtn
5c65d44069279 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 abc41a6e2fd5e ingress-nginx-admission-patch-rbdzf
0237edcc60e02 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 5cf9bd5ed4bb7 ingress-nginx-admission-create-zdffs
d05dc574a112f gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 12 minutes ago Exited registry-proxy 0 7c03493c3b3d7 registry-proxy-bbpkk
a12a2a40d4e0d registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 12 minutes ago Exited registry 0 30843bcbcdcad registry-66c9cd494c-bxkct
67149d9e3be24 6e38f40d628db 12 minutes ago Running storage-provisioner 0 de8102a6441b7 storage-provisioner
88194834292fc c69fa2e9cbf5f 12 minutes ago Running coredns 0 ad31f0c01049c coredns-7c65d6cfc9-cfl84
969a0f35b949e 60c005f310ff3 12 minutes ago Running kube-proxy 0 092bdac6bbb50 kube-proxy-ddktm
32c83be9d6183 175ffd71cce3d 12 minutes ago Running kube-controller-manager 0 aeb138a7ea6d1 kube-controller-manager-addons-807343
d399ae9b2f7d8 6bab7719df100 12 minutes ago Running kube-apiserver 0 f3e9e572b23f2 kube-apiserver-addons-807343
75e79c347dfc4 9aa1fad941575 12 minutes ago Running kube-scheduler 0 d92fba3fe8cca kube-scheduler-addons-807343
d6ed3b2e997db 2e96e5913fc06 12 minutes ago Running etcd 0 bbb9f42c62d82 etcd-addons-807343
==> coredns [88194834292f] <==
Trace[918896613]: [30.000996595s] [30.000996595s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1655127070]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 18:39:43.280) (total time: 30001ms):
Trace[1655127070]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:40:13.281)
Trace[1655127070]: [30.001220278s] [30.001220278s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
[INFO] Reloading complete
[INFO] 127.0.0.1:46539 - 57371 "HINFO IN 2989181266431568175.3170300541988887209. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015860839s
[INFO] 10.244.0.26:51184 - 33699 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000349323s
[INFO] 10.244.0.26:60648 - 23627 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000437967s
[INFO] 10.244.0.26:50186 - 36409 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000171923s
[INFO] 10.244.0.26:49524 - 47586 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000218252s
[INFO] 10.244.0.26:46207 - 47367 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104984s
[INFO] 10.244.0.26:44094 - 61553 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163497s
[INFO] 10.244.0.26:58751 - 11412 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007490996s
[INFO] 10.244.0.26:32809 - 60728 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007976071s
[INFO] 10.244.0.26:44893 - 25751 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007074952s
[INFO] 10.244.0.26:42913 - 26308 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00726801s
[INFO] 10.244.0.26:36300 - 60089 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005524268s
[INFO] 10.244.0.26:38304 - 13061 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006228787s
[INFO] 10.244.0.26:42889 - 5566 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000621578s
[INFO] 10.244.0.26:54898 - 50128 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000777141s
==> describe nodes <==
Name: addons-807343
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-807343
kubernetes.io/os=linux
minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
minikube.k8s.io/name=addons-807343
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_19T18_39_35_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-807343
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 19 Sep 2024 18:39:32 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-807343
AcquireTime: <unset>
RenewTime: Thu, 19 Sep 2024 18:52:08 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 19 Sep 2024 18:52:10 +0000 Thu, 19 Sep 2024 18:39:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 19 Sep 2024 18:52:10 +0000 Thu, 19 Sep 2024 18:39:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 19 Sep 2024 18:52:10 +0000 Thu, 19 Sep 2024 18:39:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 19 Sep 2024 18:52:10 +0000 Thu, 19 Sep 2024 18:39:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-807343
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859320Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859320Ki
pods: 110
System Info:
Machine ID: 22c49f1956b94547a3f39e5d27ac1425
System UUID: 4ffd36f8-513f-4a96-96f2-486a850e4563
Boot ID: 2196c4a9-2227-4889-b22e-1ff833eab33f
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m14s
default hello-world-app-55bf9c44b4-sxzx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21s
gcp-auth gcp-auth-89d5ffd79-qfxtn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-cfl84 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 12m
kube-system etcd-addons-807343 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 12m
kube-system kube-apiserver-addons-807343 250m (3%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-807343 200m (2%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-ddktm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-807343 100m (1%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-807343 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-807343 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-807343 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-807343 event: Registered Node addons-807343 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 8e cd 6f 69 24 08 06
[ +1.312142] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 de b3 ef fb 08 06
[ +5.250752] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 f0 c0 b8 a9 31 08 06
[ +0.638551] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 7b d2 18 97 ab 08 06
[ +0.319330] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 ec 47 a4 0f 0d 08 06
[ +20.972224] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 33 c8 f7 44 59 08 06
[ +3.877385] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 74 96 86 3a 2b 08 06
[Sep19 18:41] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 f9 1b 77 83 e0 08 06
[ +0.060405] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ca a0 1b 31 c3 d4 08 06
[Sep19 18:42] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 83 1f ba 10 b6 08 06
[ +0.000481] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
[ +0.000018] ll header: 00000000: ff ff ff ff ff ff 36 8b b9 4e 6d 88 08 06
[Sep19 18:51] IPv4: martian source 10.244.0.1 from 10.244.0.36, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 99 cd b7 2e 48 08 06
[Sep19 18:52] IPv4: martian source 10.244.0.37 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 74 96 86 3a 2b 08 06
==> etcd [d6ed3b2e997d] <==
{"level":"info","ts":"2024-09-19T18:39:30.402231Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-19T18:39:30.403185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-09-19T18:39:50.076986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.137521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-807343\" ","response":"range_response_count:1 size:4404"}
{"level":"info","ts":"2024-09-19T18:39:50.077068Z","caller":"traceutil/trace.go:171","msg":"trace[16821754] range","detail":"{range_begin:/registry/minions/addons-807343; range_end:; response_count:1; response_revision:744; }","duration":"103.227171ms","start":"2024-09-19T18:39:49.973826Z","end":"2024-09-19T18:39:50.077053Z","steps":["trace[16821754] 'agreement among raft nodes before linearized reading' (duration: 95.691891ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-19T18:39:50.077392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.336507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/snapshot-controller-56fcc65765\" ","response":"range_response_count:1 size:2108"}
{"level":"info","ts":"2024-09-19T18:39:50.077430Z","caller":"traceutil/trace.go:171","msg":"trace[1823657821] range","detail":"{range_begin:/registry/replicasets/kube-system/snapshot-controller-56fcc65765; range_end:; response_count:1; response_revision:745; }","duration":"103.375802ms","start":"2024-09-19T18:39:49.974042Z","end":"2024-09-19T18:39:50.077418Z","steps":["trace[1823657821] 'agreement among raft nodes before linearized reading' (duration: 103.280269ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-19T18:39:50.667358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.994633ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005939789616 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/volcano-system/volcano-controllers.17f6b93fb3b23040\" mod_revision:0 > success:<request_put:<key:\"/registry/events/volcano-system/volcano-controllers.17f6b93fb3b23040\" value_size:651 lease:8128032005939788591 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2024-09-19T18:39:50.667600Z","caller":"traceutil/trace.go:171","msg":"trace[1402537900] transaction","detail":"{read_only:false; response_revision:777; number_of_response:1; }","duration":"196.580902ms","start":"2024-09-19T18:39:50.471006Z","end":"2024-09-19T18:39:50.667587Z","steps":["trace[1402537900] 'process raft request' (duration: 196.529126ms)"],"step_count":1}
{"level":"info","ts":"2024-09-19T18:39:50.667789Z","caller":"traceutil/trace.go:171","msg":"trace[496157971] transaction","detail":"{read_only:false; response_revision:775; number_of_response:1; }","duration":"197.734182ms","start":"2024-09-19T18:39:50.470045Z","end":"2024-09-19T18:39:50.667779Z","steps":["trace[496157971] 'process raft request' (duration: 25.740904ms)","trace[496157971] 'compare' (duration: 102.89098ms)"],"step_count":2}
{"level":"info","ts":"2024-09-19T18:39:50.667893Z","caller":"traceutil/trace.go:171","msg":"trace[2115824162] linearizableReadLoop","detail":"{readStateIndex:790; appliedIndex:789; }","duration":"197.34399ms","start":"2024-09-19T18:39:50.470542Z","end":"2024-09-19T18:39:50.667886Z","steps":["trace[2115824162] 'read index received' (duration: 25.238293ms)","trace[2115824162] 'applied index is now lower than readState.Index' (duration: 172.104628ms)"],"step_count":2}
{"level":"info","ts":"2024-09-19T18:39:50.667998Z","caller":"traceutil/trace.go:171","msg":"trace[1686226436] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"197.202394ms","start":"2024-09-19T18:39:50.470789Z","end":"2024-09-19T18:39:50.667991Z","steps":["trace[1686226436] 'process raft request' (duration: 196.661665ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-19T18:39:50.668337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.500082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7c65d6cfc9-cfl84.17f6b93ef9b1a18d\" ","response":"range_response_count:1 size:787"}
{"level":"info","ts":"2024-09-19T18:39:50.668375Z","caller":"traceutil/trace.go:171","msg":"trace[333774830] range","detail":"{range_begin:/registry/events/kube-system/coredns-7c65d6cfc9-cfl84.17f6b93ef9b1a18d; range_end:; response_count:1; response_revision:777; }","duration":"195.55941ms","start":"2024-09-19T18:39:50.472804Z","end":"2024-09-19T18:39:50.668363Z","steps":["trace[333774830] 'agreement among raft nodes before linearized reading' (duration: 195.433353ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-19T18:39:50.668578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.029526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/volcano-system\" ","response":"range_response_count:1 size:664"}
{"level":"info","ts":"2024-09-19T18:39:50.668602Z","caller":"traceutil/trace.go:171","msg":"trace[381116697] range","detail":"{range_begin:/registry/namespaces/volcano-system; range_end:; response_count:1; response_revision:777; }","duration":"198.055458ms","start":"2024-09-19T18:39:50.470539Z","end":"2024-09-19T18:39:50.668594Z","steps":["trace[381116697] 'agreement among raft nodes before linearized reading' (duration: 197.974633ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-19T18:39:50.668747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.341749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
{"level":"info","ts":"2024-09-19T18:39:50.668774Z","caller":"traceutil/trace.go:171","msg":"trace[130722087] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:777; }","duration":"188.370893ms","start":"2024-09-19T18:39:50.480395Z","end":"2024-09-19T18:39:50.668766Z","steps":["trace[130722087] 'agreement among raft nodes before linearized reading' (duration: 188.295183ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-19T18:39:58.071358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.964887ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005939790079 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-svlpf.17f6b93face8d59d\" mod_revision:922 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-svlpf.17f6b93face8d59d\" value_size:731 lease:8128032005939788591 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-svlpf.17f6b93face8d59d\" > >>","response":"size:16"}
{"level":"info","ts":"2024-09-19T18:39:58.071445Z","caller":"traceutil/trace.go:171","msg":"trace[1960524016] linearizableReadLoop","detail":"{readStateIndex:988; appliedIndex:987; }","duration":"149.322364ms","start":"2024-09-19T18:39:57.922112Z","end":"2024-09-19T18:39:58.071434Z","steps":["trace[1960524016] 'read index received' (duration: 39.089852ms)","trace[1960524016] 'applied index is now lower than readState.Index' (duration: 110.231519ms)"],"step_count":2}
{"level":"info","ts":"2024-09-19T18:39:58.071478Z","caller":"traceutil/trace.go:171","msg":"trace[1242200740] transaction","detail":"{read_only:false; response_revision:971; number_of_response:1; }","duration":"149.743114ms","start":"2024-09-19T18:39:57.921717Z","end":"2024-09-19T18:39:58.071460Z","steps":["trace[1242200740] 'process raft request' (duration: 39.494631ms)","trace[1242200740] 'compare' (duration: 109.83951ms)"],"step_count":2}
{"level":"warn","ts":"2024-09-19T18:39:58.071603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.480294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.17f6b940c056e1c9\" ","response":"range_response_count:1 size:912"}
{"level":"info","ts":"2024-09-19T18:39:58.071639Z","caller":"traceutil/trace.go:171","msg":"trace[90722266] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch.17f6b940c056e1c9; range_end:; response_count:1; response_revision:971; }","duration":"149.519404ms","start":"2024-09-19T18:39:57.922109Z","end":"2024-09-19T18:39:58.071629Z","steps":["trace[90722266] 'agreement among raft nodes before linearized reading' (duration: 149.366006ms)"],"step_count":1}
{"level":"info","ts":"2024-09-19T18:49:30.795252Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1887}
{"level":"info","ts":"2024-09-19T18:49:30.820176Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1887,"took":"24.431283ms","hash":1617464157,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4853760,"current-db-size-in-use":"4.9 MB"}
{"level":"info","ts":"2024-09-19T18:49:30.820215Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1617464157,"revision":1887,"compact-revision":-1}
==> gcp-auth [17e78c6f0cb4] <==
2024/09/19 18:43:02 Ready to write response ...
2024/09/19 18:51:04 Ready to marshal response ...
2024/09/19 18:51:04 Ready to write response ...
2024/09/19 18:51:04 Ready to marshal response ...
2024/09/19 18:51:04 Ready to write response ...
2024/09/19 18:51:12 Ready to marshal response ...
2024/09/19 18:51:12 Ready to write response ...
2024/09/19 18:51:14 Ready to marshal response ...
2024/09/19 18:51:14 Ready to write response ...
2024/09/19 18:51:18 Ready to marshal response ...
2024/09/19 18:51:18 Ready to write response ...
2024/09/19 18:51:21 Ready to marshal response ...
2024/09/19 18:51:21 Ready to write response ...
2024/09/19 18:51:21 Ready to marshal response ...
2024/09/19 18:51:21 Ready to write response ...
2024/09/19 18:51:21 Ready to marshal response ...
2024/09/19 18:51:21 Ready to write response ...
2024/09/19 18:51:32 Ready to marshal response ...
2024/09/19 18:51:32 Ready to write response ...
2024/09/19 18:51:47 Ready to marshal response ...
2024/09/19 18:51:47 Ready to write response ...
2024/09/19 18:51:55 Ready to marshal response ...
2024/09/19 18:51:55 Ready to write response ...
2024/09/19 18:52:03 Ready to marshal response ...
2024/09/19 18:52:03 Ready to write response ...
==> kernel <==
18:52:16 up 34 min, 0 users, load average: 2.17, 0.89, 0.49
Linux addons-807343 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [d399ae9b2f7d] <==
W0919 18:42:53.988091 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0919 18:42:53.988253 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0919 18:42:54.082287 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0919 18:42:54.187729 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0919 18:42:54.477350 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0919 18:42:54.768400 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0919 18:51:21.304157 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.22.220"}
I0919 18:51:25.926806 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E0919 18:51:28.318436 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I0919 18:51:48.267573 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0919 18:51:48.267628 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0919 18:51:48.283280 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0919 18:51:48.283321 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0919 18:51:48.294616 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0919 18:51:48.294665 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0919 18:51:48.307991 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0919 18:51:48.308028 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0919 18:51:49.285199 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0919 18:51:49.308292 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0919 18:51:49.370183 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0919 18:51:55.167707 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0919 18:51:55.320624 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.87.156"}
I0919 18:51:58.570428 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0919 18:51:59.586084 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0919 18:52:03.863128 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.231.140"}
==> kube-controller-manager [32c83be9d618] <==
I0919 18:52:03.736658 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.559µs"
I0919 18:52:03.771883 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.93µs"
I0919 18:52:05.014800 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.24168ms"
I0919 18:52:05.014883 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.008µs"
I0919 18:52:05.447440 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
I0919 18:52:05.448654 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.62µs"
I0919 18:52:05.470060 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
W0919 18:52:06.477741 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 18:52:06.477783 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0919 18:52:06.478548 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 18:52:06.478577 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0919 18:52:07.990136 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 18:52:07.990170 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0919 18:52:08.925246 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
I0919 18:52:09.829037 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0919 18:52:09.829072 1 shared_informer.go:320] Caches are synced for resource quota
W0919 18:52:09.855357 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 18:52:09.855392 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0919 18:52:09.967249 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0919 18:52:09.967286 1 shared_informer.go:320] Caches are synced for garbage collector
I0919 18:52:10.336072 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-807343"
I0919 18:52:15.236863 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.686µs"
I0919 18:52:15.526959 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
W0919 18:52:15.947356 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 18:52:15.947390 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [969a0f35b949] <==
I0919 18:39:43.792337 1 server_linux.go:66] "Using iptables proxy"
I0919 18:39:44.469618 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0919 18:39:44.469761 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0919 18:39:44.970481 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0919 18:39:44.970535 1 server_linux.go:169] "Using iptables Proxier"
I0919 18:39:44.973731 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0919 18:39:44.974101 1 server.go:483] "Version info" version="v1.31.1"
I0919 18:39:44.974117 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0919 18:39:44.976275 1 config.go:199] "Starting service config controller"
I0919 18:39:44.976291 1 shared_informer.go:313] Waiting for caches to sync for service config
I0919 18:39:44.976310 1 config.go:105] "Starting endpoint slice config controller"
I0919 18:39:44.976315 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0919 18:39:44.976563 1 config.go:328] "Starting node config controller"
I0919 18:39:44.976571 1 shared_informer.go:313] Waiting for caches to sync for node config
I0919 18:39:44.981240 1 shared_informer.go:320] Caches are synced for node config
I0919 18:39:45.076846 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0919 18:39:45.076943 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [75e79c347dfc] <==
W0919 18:39:32.468462 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0919 18:39:32.468624 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0919 18:39:32.467742 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0919 18:39:32.468732 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0919 18:39:32.467794 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0919 18:39:32.468775 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0919 18:39:32.467850 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0919 18:39:32.468803 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0919 18:39:32.468015 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0919 18:39:32.468841 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0919 18:39:32.468260 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0919 18:39:32.468872 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0919 18:39:32.468259 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0919 18:39:32.468899 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0919 18:39:33.306462 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0919 18:39:33.306502 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0919 18:39:33.325723 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0919 18:39:33.325763 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0919 18:39:33.337982 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0919 18:39:33.338011 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0919 18:39:33.486558 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0919 18:39:33.486601 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0919 18:39:33.498988 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0919 18:39:33.499017 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
I0919 18:39:34.093966 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 19 18:52:09 addons-807343 kubelet[2431]: I0919 18:52:09.058259 2431 scope.go:117] "RemoveContainer" containerID="10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"
Sep 19 18:52:09 addons-807343 kubelet[2431]: E0919 18:52:09.058779 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8" containerID="10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"
Sep 19 18:52:09 addons-807343 kubelet[2431]: I0919 18:52:09.058819 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"} err="failed to get container status \"10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"
Sep 19 18:52:10 addons-807343 kubelet[2431]: I0919 18:52:10.716132 2431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097e62a7-8e4d-4361-884e-3f59d6fd556a" path="/var/lib/kubelet/pods/097e62a7-8e4d-4361-884e-3f59d6fd556a/volumes"
Sep 19 18:52:14 addons-807343 kubelet[2431]: E0919 18:52:14.710119 2431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="061e7e93-a91f-442f-ab9f-2c492cf63438"
Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.936609 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e48103cd-6304-4934-990b-0d83789f05d3-gcp-creds\") pod \"e48103cd-6304-4934-990b-0d83789f05d3\" (UID: \"e48103cd-6304-4934-990b-0d83789f05d3\") "
Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.936676 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g699b\" (UniqueName: \"kubernetes.io/projected/e48103cd-6304-4934-990b-0d83789f05d3-kube-api-access-g699b\") pod \"e48103cd-6304-4934-990b-0d83789f05d3\" (UID: \"e48103cd-6304-4934-990b-0d83789f05d3\") "
Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.936728 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e48103cd-6304-4934-990b-0d83789f05d3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e48103cd-6304-4934-990b-0d83789f05d3" (UID: "e48103cd-6304-4934-990b-0d83789f05d3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.938403 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48103cd-6304-4934-990b-0d83789f05d3-kube-api-access-g699b" (OuterVolumeSpecName: "kube-api-access-g699b") pod "e48103cd-6304-4934-990b-0d83789f05d3" (UID: "e48103cd-6304-4934-990b-0d83789f05d3"). InnerVolumeSpecName "kube-api-access-g699b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.037825 2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g699b\" (UniqueName: \"kubernetes.io/projected/e48103cd-6304-4934-990b-0d83789f05d3-kube-api-access-g699b\") on node \"addons-807343\" DevicePath \"\""
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.037869 2431 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e48103cd-6304-4934-990b-0d83789f05d3-gcp-creds\") on node \"addons-807343\" DevicePath \"\""
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.642191 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hnj\" (UniqueName: \"kubernetes.io/projected/073b4ea3-119e-40f8-9331-51fd7dfdf5bf-kube-api-access-b9hnj\") pod \"073b4ea3-119e-40f8-9331-51fd7dfdf5bf\" (UID: \"073b4ea3-119e-40f8-9331-51fd7dfdf5bf\") "
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.642237 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475lf\" (UniqueName: \"kubernetes.io/projected/5daab8c5-d486-4f2e-a165-b7129bb49ef1-kube-api-access-475lf\") pod \"5daab8c5-d486-4f2e-a165-b7129bb49ef1\" (UID: \"5daab8c5-d486-4f2e-a165-b7129bb49ef1\") "
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.644304 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5daab8c5-d486-4f2e-a165-b7129bb49ef1-kube-api-access-475lf" (OuterVolumeSpecName: "kube-api-access-475lf") pod "5daab8c5-d486-4f2e-a165-b7129bb49ef1" (UID: "5daab8c5-d486-4f2e-a165-b7129bb49ef1"). InnerVolumeSpecName "kube-api-access-475lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.644355 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073b4ea3-119e-40f8-9331-51fd7dfdf5bf-kube-api-access-b9hnj" (OuterVolumeSpecName: "kube-api-access-b9hnj") pod "073b4ea3-119e-40f8-9331-51fd7dfdf5bf" (UID: "073b4ea3-119e-40f8-9331-51fd7dfdf5bf"). InnerVolumeSpecName "kube-api-access-b9hnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.743245 2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b9hnj\" (UniqueName: \"kubernetes.io/projected/073b4ea3-119e-40f8-9331-51fd7dfdf5bf-kube-api-access-b9hnj\") on node \"addons-807343\" DevicePath \"\""
Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.743285 2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-475lf\" (UniqueName: \"kubernetes.io/projected/5daab8c5-d486-4f2e-a165-b7129bb49ef1-kube-api-access-475lf\") on node \"addons-807343\" DevicePath \"\""
Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.117485 2431 scope.go:117] "RemoveContainer" containerID="a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.137448 2431 scope.go:117] "RemoveContainer" containerID="a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
Sep 19 18:52:16 addons-807343 kubelet[2431]: E0919 18:52:16.138167 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15" containerID="a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.138202 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"} err="failed to get container status \"a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15\": rpc error: code = Unknown desc = Error response from daemon: No such container: a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.138224 2431 scope.go:117] "RemoveContainer" containerID="d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.176008 2431 scope.go:117] "RemoveContainer" containerID="d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
Sep 19 18:52:16 addons-807343 kubelet[2431]: E0919 18:52:16.176652 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4" containerID="d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.176686 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"} err="failed to get container status \"d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4\": rpc error: code = Unknown desc = Error response from daemon: No such container: d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
==> storage-provisioner [67149d9e3be2] <==
I0919 18:39:48.488021 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0919 18:39:48.577006 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0919 18:39:48.577054 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0919 18:39:48.584649 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0919 18:39:48.584785 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-807343_706b36b6-f72a-4b59-a5a2-5eba49b7f960!
I0919 18:39:48.585520 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54f97acc-3900-4852-a79f-87c3d35f67c3", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-807343_706b36b6-f72a-4b59-a5a2-5eba49b7f960 became leader
I0919 18:39:48.685092 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-807343_706b36b6-f72a-4b59-a5a2-5eba49b7f960!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-807343 -n addons-807343
helpers_test.go:261: (dbg) Run: kubectl --context addons-807343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-807343 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-807343 describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-807343/192.168.49.2
Start Time:       Thu, 19 Sep 2024 18:43:02 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
  IP:  10.244.0.28
Containers:
  busybox:
    Container ID:
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rl8zt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-rl8zt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to addons-807343
  Normal   Pulling    7m43s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m31s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m6s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.33s)