=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.295153ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-k2qlh" [638f01b9-2726-41db-a1a9-43e4bf4d8443] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004201297s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bfrml" [cab49b7f-8d32-4017-9de8-d55b0ce0e2f3] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00325487s
addons_test.go:338: (dbg) Run: kubectl --context addons-193618 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-193618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-193618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.122557201s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-193618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response to be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-arm64 -p addons-193618 ip
2024/09/23 10:34:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-arm64 -p addons-193618 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-193618
helpers_test.go:235: (dbg) docker inspect addons-193618:
-- stdout --
[
{
"Id": "26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8",
"Created": "2024-09-23T10:21:10.351973489Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8777,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-23T10:21:10.522608749Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
"ResolvConfPath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/hostname",
"HostsPath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/hosts",
"LogPath": "/var/lib/docker/containers/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8/26ced008089d8b423fd113de7e8028b87f63e5f82a8c95e9d3acafe98a5b59f8-json.log",
"Name": "/addons-193618",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-193618:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-193618",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207-init/diff:/var/lib/docker/overlay2/6f03a4ef8a140fe5450018392e20b0528047b3be7fcd35f8ec674bbe5ee3d5d2/diff",
"MergedDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207/merged",
"UpperDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207/diff",
"WorkDir": "/var/lib/docker/overlay2/a511ccf62f412ec5a50f2b7bc16585d6fbe98040c81c3f78b8dd651a4595b207/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-193618",
"Source": "/var/lib/docker/volumes/addons-193618/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-193618",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-193618",
"name.minikube.sigs.k8s.io": "addons-193618",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1e9fdd0d3c0d3e742257c4aaf9b9d4dc4c797c56eb7c1ac271bbf53bc2e23b8d",
"SandboxKey": "/var/run/docker/netns/1e9fdd0d3c0d",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-193618": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "8d168128c28da18e071830519b7328adca53718bb0836815783db8b049afc06a",
"EndpointID": "10f0b59df9c2bfdc2674a5486ad12df98c281c58155240fee45500c8a048add7",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-193618",
"26ced008089d"
]
}
}
}
}
]
-- /stdout --
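The `docker inspect` dump above carries the host-port mappings minikube publishes for the node container (e.g. the registry's `5000/tcp` mapped to `127.0.0.1:32770`). A minimal sketch of extracting that mapping with Python's stdlib `json` module, using an abridged sample of the `NetworkSettings.Ports` section shown above (the real output has many more fields):

```python
import json

# Abridged sample of the NetworkSettings.Ports section from a
# `docker inspect <container>` dump like the one above.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
        "2376/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32769"}],
        "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
      }
    }
  }
]
""")

def host_ports(inspect_json):
    """Map container port (e.g. '5000/tcp') to its (host IP, host port) binding."""
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return {
        cport: (bindings[0]["HostIp"], int(bindings[0]["HostPort"]))
        for cport, bindings in ports.items()
        if bindings  # unbound ports appear as null/empty lists
    }

mapping = host_ports(inspect_output)
print(mapping["5000/tcp"])  # the registry port as published on the host
```

This is the same lookup the post-mortem helpers effectively perform when they probe the node over `127.0.0.1`; the helper name `host_ports` is illustrative, not part of the test suite.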
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-193618 -n addons-193618
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-193618 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-193618 logs -n 25: (1.150115144s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-710688 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | -p download-only-710688 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p download-only-710688 | download-only-710688 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| start | -o=json --download-only | download-only-126776 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | -p download-only-126776 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p download-only-126776 | download-only-126776 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p download-only-710688 | download-only-710688 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p download-only-126776 | download-only-126776 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| start | --download-only -p | download-docker-631157 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | download-docker-631157 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-631157 | download-docker-631157 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| start | --download-only -p | binary-mirror-590765 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | binary-mirror-590765 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:33447 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-590765 | binary-mirror-590765 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| addons | enable dashboard -p | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | addons-193618 | | | | | |
| addons | disable dashboard -p | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | addons-193618 | | | | | |
| start | -p addons-193618 --wait=true | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:24 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-193618 addons disable | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:24 UTC | 23 Sep 24 10:25 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | enable headlamp | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
| | -p addons-193618 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-193618 addons disable | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-193618 addons | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-193618 addons | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-193618 ip | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
| addons | addons-193618 addons disable | addons-193618 | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/23 10:20:44
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 10:20:44.962126 8275 out.go:345] Setting OutFile to fd 1 ...
I0923 10:20:44.962335 8275 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:20:44.962384 8275 out.go:358] Setting ErrFile to fd 2...
I0923 10:20:44.962404 8275 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:20:44.962670 8275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-2206/.minikube/bin
I0923 10:20:44.963127 8275 out.go:352] Setting JSON to false
I0923 10:20:44.963912 8275 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":193,"bootTime":1727086652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0923 10:20:44.964006 8275 start.go:139] virtualization:
I0923 10:20:44.967538 8275 out.go:177] * [addons-193618] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0923 10:20:44.969391 8275 out.go:177] - MINIKUBE_LOCATION=19689
I0923 10:20:44.969453 8275 notify.go:220] Checking for updates...
I0923 10:20:44.972072 8275 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 10:20:44.974315 8275 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19689-2206/kubeconfig
I0923 10:20:44.976239 8275 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-2206/.minikube
I0923 10:20:44.978211 8275 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0923 10:20:44.980137 8275 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0923 10:20:44.982367 8275 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 10:20:45.050981 8275 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I0923 10:20:45.051144 8275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 10:20:45.195189 8275 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 10:20:45.184256429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 10:20:45.195340 8275 docker.go:318] overlay module found
I0923 10:20:45.197697 8275 out.go:177] * Using the docker driver based on user configuration
I0923 10:20:45.199831 8275 start.go:297] selected driver: docker
I0923 10:20:45.199858 8275 start.go:901] validating driver "docker" against <nil>
I0923 10:20:45.199874 8275 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 10:20:45.200611 8275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 10:20:45.322427 8275 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 10:20:45.310042631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 10:20:45.322751 8275 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0923 10:20:45.323003 8275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 10:20:45.325098 8275 out.go:177] * Using Docker driver with root privileges
I0923 10:20:45.327245 8275 cni.go:84] Creating CNI manager for ""
I0923 10:20:45.327332 8275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 10:20:45.327345 8275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0923 10:20:45.327445 8275 start.go:340] cluster config:
{Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 10:20:45.329874 8275 out.go:177] * Starting "addons-193618" primary control-plane node in "addons-193618" cluster
I0923 10:20:45.331989 8275 cache.go:121] Beginning downloading kic base image for docker with docker
I0923 10:20:45.336452 8275 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
I0923 10:20:45.338592 8275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:20:45.338680 8275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 10:20:45.338692 8275 cache.go:56] Caching tarball of preloaded images
I0923 10:20:45.338737 8275 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
I0923 10:20:45.338797 8275 preload.go:172] Found /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 10:20:45.338809 8275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 10:20:45.339241 8275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/config.json ...
I0923 10:20:45.339315 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/config.json: {Name:mke8b7301d3a5167a1f1aba5f23a929aa585f3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:20:45.360091 8275 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
I0923 10:20:45.360333 8275 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
I0923 10:20:45.360373 8275 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
I0923 10:20:45.360402 8275 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
I0923 10:20:45.360412 8275 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
I0923 10:20:45.360454 8275 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
I0923 10:21:02.600728 8275 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
I0923 10:21:02.600790 8275 cache.go:194] Successfully downloaded all kic artifacts
I0923 10:21:02.600823 8275 start.go:360] acquireMachinesLock for addons-193618: {Name:mk48dd4aba024ddd995eaf88bfc43ada7e8ca838 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 10:21:02.600968 8275 start.go:364] duration metric: took 122.798µs to acquireMachinesLock for "addons-193618"
I0923 10:21:02.600997 8275 start.go:93] Provisioning new machine with config: &{Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 10:21:02.601070 8275 start.go:125] createHost starting for "" (driver="docker")
I0923 10:21:02.604392 8275 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0923 10:21:02.604668 8275 start.go:159] libmachine.API.Create for "addons-193618" (driver="docker")
I0923 10:21:02.604708 8275 client.go:168] LocalClient.Create starting
I0923 10:21:02.604837 8275 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem
I0923 10:21:04.047510 8275 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem
I0923 10:21:04.291954 8275 cli_runner.go:164] Run: docker network inspect addons-193618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 10:21:04.309422 8275 cli_runner.go:211] docker network inspect addons-193618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 10:21:04.309509 8275 network_create.go:284] running [docker network inspect addons-193618] to gather additional debugging logs...
I0923 10:21:04.309531 8275 cli_runner.go:164] Run: docker network inspect addons-193618
W0923 10:21:04.324774 8275 cli_runner.go:211] docker network inspect addons-193618 returned with exit code 1
I0923 10:21:04.324807 8275 network_create.go:287] error running [docker network inspect addons-193618]: docker network inspect addons-193618: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-193618 not found
I0923 10:21:04.324821 8275 network_create.go:289] output of [docker network inspect addons-193618]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-193618 not found
** /stderr **
I0923 10:21:04.324924 8275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 10:21:04.340876 8275 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004b9240}
I0923 10:21:04.340930 8275 network_create.go:124] attempt to create docker network addons-193618 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0923 10:21:04.341047 8275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-193618 addons-193618
I0923 10:21:04.413242 8275 network_create.go:108] docker network addons-193618 192.168.49.0/24 created
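The `network.go:206` line above logs the bookkeeping minikube derives for its chosen private subnet (gateway, client range, broadcast). As an illustrative sketch only — not minikube's actual Go implementation — the same fields can be reproduced with Python's stdlib `ipaddress` module; `subnet_fields` is a hypothetical helper name:

```python
import ipaddress

def subnet_fields(cidr: str) -> dict:
    """Derive gateway, usable client range, and broadcast for a subnet,
    mirroring the fields minikube logs for its chosen private /24."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())          # .1 through .254 for a /24
    return {
        "gateway": str(hosts[0]),      # first usable address is the gateway
        "client_min": str(hosts[1]),   # clients start after the gateway
        "client_max": str(hosts[-1]),
        "broadcast": str(net.broadcast_address),
    }

print(subnet_fields("192.168.49.0/24"))
```

For `192.168.49.0/24` this yields gateway `.1`, client range `.2`–`.254`, and broadcast `.255`, matching the logged values.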
I0923 10:21:04.413272 8275 kic.go:121] calculated static IP "192.168.49.2" for the "addons-193618" container
I0923 10:21:04.413357 8275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0923 10:21:04.427026 8275 cli_runner.go:164] Run: docker volume create addons-193618 --label name.minikube.sigs.k8s.io=addons-193618 --label created_by.minikube.sigs.k8s.io=true
I0923 10:21:04.445042 8275 oci.go:103] Successfully created a docker volume addons-193618
I0923 10:21:04.445146 8275 cli_runner.go:164] Run: docker run --rm --name addons-193618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193618 --entrypoint /usr/bin/test -v addons-193618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
I0923 10:21:06.574836 8275 cli_runner.go:217] Completed: docker run --rm --name addons-193618-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193618 --entrypoint /usr/bin/test -v addons-193618:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.129645865s)
I0923 10:21:06.574866 8275 oci.go:107] Successfully prepared a docker volume addons-193618
I0923 10:21:06.574885 8275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:21:06.574905 8275 kic.go:194] Starting extracting preloaded images to volume ...
I0923 10:21:06.574974 8275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-193618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
I0923 10:21:10.282100 8275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-2206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-193618:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.707083791s)
I0923 10:21:10.282132 8275 kic.go:203] duration metric: took 3.707225404s to extract preloaded images to volume ...
W0923 10:21:10.282287 8275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0923 10:21:10.282411 8275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0923 10:21:10.336323 8275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-193618 --name addons-193618 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-193618 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-193618 --network addons-193618 --ip 192.168.49.2 --volume addons-193618:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
I0923 10:21:10.691915 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Running}}
I0923 10:21:10.719577 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:10.745328 8275 cli_runner.go:164] Run: docker exec addons-193618 stat /var/lib/dpkg/alternatives/iptables
I0923 10:21:10.811993 8275 oci.go:144] the created container "addons-193618" has a running status.
I0923 10:21:10.812024 8275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa...
I0923 10:21:11.880902 8275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0923 10:21:11.910796 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:11.928490 8275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0923 10:21:11.928513 8275 kic_runner.go:114] Args: [docker exec --privileged addons-193618 chown docker:docker /home/docker/.ssh/authorized_keys]
I0923 10:21:11.990769 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:12.015654 8275 machine.go:93] provisionDockerMachine start ...
I0923 10:21:12.015774 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:12.036679 8275 main.go:141] libmachine: Using SSH client type: native
I0923 10:21:12.036993 8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 10:21:12.037010 8275 main.go:141] libmachine: About to run SSH command:
hostname
I0923 10:21:12.172736 8275 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-193618
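The SSH connection above targets `127.0.0.1 32768` because minikube resolves the host port that Docker mapped to container port 22 using the Go template `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort`. A minimal sketch of that lookup over the same JSON shape (the sample inspect payload and the `ssh_host_port` helper are assumptions for illustration):

```python
import json

# Assumed inspect output shape, matching Docker's NetworkSettings.Ports layout.
inspect = json.loads("""
{"NetworkSettings": {"Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32768"}]}}}
""")

def ssh_host_port(container_inspect: dict) -> int:
    """Resolve the host port mapped to container port 22, the equivalent of
    the Go template (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort."""
    return int(container_inspect["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"])

print(ssh_host_port(inspect))  # -> 32768
```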
I0923 10:21:12.172763 8275 ubuntu.go:169] provisioning hostname "addons-193618"
I0923 10:21:12.172834 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:12.190417 8275 main.go:141] libmachine: Using SSH client type: native
I0923 10:21:12.190678 8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 10:21:12.190696 8275 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-193618 && echo "addons-193618" | sudo tee /etc/hostname
I0923 10:21:12.338311 8275 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-193618
I0923 10:21:12.338393 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:12.356488 8275 main.go:141] libmachine: Using SSH client type: native
I0923 10:21:12.356737 8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 10:21:12.356760 8275 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-193618' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-193618/g' /etc/hosts;
else
echo '127.0.1.1 addons-193618' | sudo tee -a /etc/hosts;
fi
fi
I0923 10:21:12.493163 8275 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0923 10:21:12.493187 8275 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-2206/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-2206/.minikube}
I0923 10:21:12.493205 8275 ubuntu.go:177] setting up certificates
I0923 10:21:12.493216 8275 provision.go:84] configureAuth start
I0923 10:21:12.493272 8275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193618
I0923 10:21:12.510414 8275 provision.go:143] copyHostCerts
I0923 10:21:12.510500 8275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-2206/.minikube/ca.pem (1078 bytes)
I0923 10:21:12.510680 8275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-2206/.minikube/cert.pem (1123 bytes)
I0923 10:21:12.510744 8275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-2206/.minikube/key.pem (1675 bytes)
I0923 10:21:12.510797 8275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-2206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca-key.pem org=jenkins.addons-193618 san=[127.0.0.1 192.168.49.2 addons-193618 localhost minikube]
I0923 10:21:13.296721 8275 provision.go:177] copyRemoteCerts
I0923 10:21:13.296791 8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0923 10:21:13.296832 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:13.314068 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:13.409871 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0923 10:21:13.434989 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0923 10:21:13.458893 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0923 10:21:13.483616 8275 provision.go:87] duration metric: took 990.387218ms to configureAuth
I0923 10:21:13.483643 8275 ubuntu.go:193] setting minikube options for container-runtime
I0923 10:21:13.483830 8275 config.go:182] Loaded profile config "addons-193618": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:21:13.483894 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:13.504477 8275 main.go:141] libmachine: Using SSH client type: native
I0923 10:21:13.504728 8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 10:21:13.504747 8275 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0923 10:21:13.637687 8275 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0923 10:21:13.637710 8275 ubuntu.go:71] root file system type: overlay
I0923 10:21:13.637823 8275 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0923 10:21:13.637893 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:13.655165 8275 main.go:141] libmachine: Using SSH client type: native
I0923 10:21:13.655404 8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 10:21:13.655484 8275 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0923 10:21:13.800744 8275 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0923 10:21:13.800836 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:13.817289 8275 main.go:141] libmachine: Using SSH client type: native
I0923 10:21:13.817574 8275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 10:21:13.817599 8275 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0923 10:21:14.602184 8275 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-19 14:24:16.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-23 10:21:13.793820605 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
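The command driving the diff output above is `sudo diff -u ... || { sudo mv ...; systemctl daemon-reload && systemctl restart docker; }`: the new unit replaces the old one, and the daemon is restarted, only when the two files actually differ. A hedged Python sketch of that compare-then-install idiom (the `install_if_changed` helper is hypothetical, not minikube code):

```python
import filecmp
import os
import shutil

def install_if_changed(new_path: str, dest_path: str) -> bool:
    """Move new_path over dest_path only when contents differ, mirroring
    the `diff -u || { mv; restart }` idiom. Returns True when an update
    (and hence a service restart) would be needed."""
    if os.path.exists(dest_path) and filecmp.cmp(new_path, dest_path, shallow=False):
        os.remove(new_path)   # identical: discard the candidate, no restart
        return False
    shutil.move(new_path, dest_path)
    return True               # caller would daemon-reload && restart here
```

In the run logged here the diff is non-empty, so the branch after `||` fires and docker is re-enabled and restarted.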
I0923 10:21:14.602265 8275 machine.go:96] duration metric: took 2.586585811s to provisionDockerMachine
I0923 10:21:14.602293 8275 client.go:171] duration metric: took 11.997574442s to LocalClient.Create
I0923 10:21:14.602321 8275 start.go:167] duration metric: took 11.997654828s to libmachine.API.Create "addons-193618"
I0923 10:21:14.602353 8275 start.go:293] postStartSetup for "addons-193618" (driver="docker")
I0923 10:21:14.602379 8275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 10:21:14.602466 8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 10:21:14.602541 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:14.619671 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:14.713987 8275 ssh_runner.go:195] Run: cat /etc/os-release
I0923 10:21:14.717161 8275 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0923 10:21:14.717197 8275 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0923 10:21:14.717210 8275 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0923 10:21:14.717230 8275 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0923 10:21:14.717244 8275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-2206/.minikube/addons for local assets ...
I0923 10:21:14.717318 8275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-2206/.minikube/files for local assets ...
I0923 10:21:14.717347 8275 start.go:296] duration metric: took 114.974073ms for postStartSetup
I0923 10:21:14.717658 8275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193618
I0923 10:21:14.733522 8275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/config.json ...
I0923 10:21:14.733796 8275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0923 10:21:14.733846 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:14.749927 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:14.841805 8275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0923 10:21:14.846242 8275 start.go:128] duration metric: took 12.245156029s to createHost
I0923 10:21:14.846270 8275 start.go:83] releasing machines lock for "addons-193618", held for 12.24528901s
I0923 10:21:14.846339 8275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-193618
I0923 10:21:14.863090 8275 ssh_runner.go:195] Run: cat /version.json
I0923 10:21:14.863152 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:14.863403 8275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0923 10:21:14.863477 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:14.884145 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:14.894601 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:14.976522 8275 ssh_runner.go:195] Run: systemctl --version
I0923 10:21:15.112557 8275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0923 10:21:15.118103 8275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0923 10:21:15.150524 8275 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0923 10:21:15.150644 8275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0923 10:21:15.185900 8275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0923 10:21:15.185941 8275 start.go:495] detecting cgroup driver to use...
I0923 10:21:15.185983 8275 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 10:21:15.186097 8275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 10:21:15.204497 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0923 10:21:15.215246 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0923 10:21:15.225593 8275 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0923 10:21:15.225663 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0923 10:21:15.235613 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 10:21:15.245769 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0923 10:21:15.255562 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 10:21:15.265489 8275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0923 10:21:15.274480 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0923 10:21:15.284082 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0923 10:21:15.293674 8275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0923 10:21:15.303210 8275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0923 10:21:15.311456 8275 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0923 10:21:15.311540 8275 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0923 10:21:15.325183 8275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
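The `crio.go:166` warning above is expected: minikube first probes `sysctl net.bridge.bridge-nf-call-iptables`, and when the key is absent (status 255) it falls back to `sudo modprobe br_netfilter` to load the module that creates it. A sketch of that probe-then-fallback control flow, using `false`/`true` as stand-ins for the real commands so it runs anywhere (the helper name is an assumption):

```python
import subprocess

def probe_then_fallback(probe_cmd, fallback_cmd):
    """Run probe_cmd; on a non-zero exit, run fallback_cmd instead.
    Mirrors the sysctl-then-modprobe sequence for br_netfilter."""
    result = subprocess.run(probe_cmd, capture_output=True)
    if result.returncode != 0:
        subprocess.run(fallback_cmd, check=True)
        return "fallback"
    return "probe-ok"

# Stand-ins for `sysctl net.bridge.bridge-nf-call-iptables` and `modprobe br_netfilter`:
print(probe_then_fallback(["false"], ["true"]))   # probe fails -> fallback runs
```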
I0923 10:21:15.333724 8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 10:21:15.416555 8275 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0923 10:21:15.518505 8275 start.go:495] detecting cgroup driver to use...
I0923 10:21:15.518598 8275 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 10:21:15.518670 8275 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0923 10:21:15.531840 8275 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0923 10:21:15.531947 8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0923 10:21:15.551371 8275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 10:21:15.568800 8275 ssh_runner.go:195] Run: which cri-dockerd
I0923 10:21:15.574906 8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0923 10:21:15.584573 8275 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0923 10:21:15.612064 8275 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0923 10:21:15.713676 8275 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0923 10:21:15.820450 8275 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0923 10:21:15.820650 8275 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0923 10:21:15.841683 8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 10:21:15.928564 8275 ssh_runner.go:195] Run: sudo systemctl restart docker
I0923 10:21:16.197732 8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0923 10:21:16.210185 8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 10:21:16.222651 8275 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0923 10:21:16.317657 8275 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0923 10:21:16.409930 8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 10:21:16.503443 8275 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0923 10:21:16.517391 8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 10:21:16.529144 8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 10:21:16.621950 8275 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0923 10:21:16.701099 8275 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0923 10:21:16.701258 8275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0923 10:21:16.706348 8275 start.go:563] Will wait 60s for crictl version
I0923 10:21:16.706484 8275 ssh_runner.go:195] Run: which crictl
I0923 10:21:16.710187 8275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0923 10:21:16.745214 8275 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.0
RuntimeApiVersion: v1
I0923 10:21:16.745286 8275 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0923 10:21:16.768526 8275 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0923 10:21:16.793108 8275 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
I0923 10:21:16.793212 8275 cli_runner.go:164] Run: docker network inspect addons-193618 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 10:21:16.808755 8275 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0923 10:21:16.812130 8275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
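The one-liner above is an idempotent hosts-file update: strip any stale tab-separated entry for the name, append the fresh mapping, then copy the result back over `/etc/hosts` via sudo. The same idiom is reused later (at 10:21:16.976) for `control-plane.minikube.internal`. It can be demonstrated on a scratch file, with no sudo:

```shell
# The same grep/echo idiom minikube uses, run against a scratch copy of a
# hosts file. Fields are tab-separated so the $'\t...' anchor matches.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Remove any stale entry for the name, then append the current mapping:
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered out before the new one is appended, re-running the command any number of times leaves exactly one entry for the name.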
I0923 10:21:16.822926 8275 kubeadm.go:883] updating cluster {Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0923 10:21:16.823039 8275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:21:16.823091 8275 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0923 10:21:16.840249 8275 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0923 10:21:16.840270 8275 docker.go:615] Images already preloaded, skipping extraction
I0923 10:21:16.840334 8275 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0923 10:21:16.858635 8275 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0923 10:21:16.858661 8275 cache_images.go:84] Images are preloaded, skipping loading
I0923 10:21:16.858671 8275 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0923 10:21:16.858765 8275 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-193618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
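The kubelet unit fragment above uses a systemd drop-in idiom worth noting: for a non-oneshot service, a drop-in cannot simply add a second `ExecStart=`; it must first emit an empty `ExecStart=` to clear the one inherited from the base unit, then set the replacement. Writing the same fragment to a scratch directory (the real target is `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`, per the scp at 10:21:16.919) makes the double line visible:

```shell
# The drop-in from the log, written to a temp dir. The first, empty
# ExecStart= resets the base unit's command; the second sets the real one.
dir=$(mktemp -d)
cat > "$dir/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-193618 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
EOF
grep -c '^ExecStart=' "$dir/10-kubeadm.conf"
```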
I0923 10:21:16.858835 8275 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0923 10:21:16.901788 8275 cni.go:84] Creating CNI manager for ""
I0923 10:21:16.901822 8275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 10:21:16.901835 8275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 10:21:16.901855 8275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-193618 NodeName:addons-193618 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0923 10:21:16.902012 8275 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-193618"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
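The generated kubeadm config above is a single four-document YAML stream: `InitConfiguration` and `ClusterConfiguration` (both `kubeadm.k8s.io/v1beta3`), a `KubeletConfiguration`, and a `KubeProxyConfiguration`. A quick shell check of the document kinds, using a trimmed copy of the stream (headers only, for illustration):

```shell
# Skeleton of the multi-document stream minikube writes to
# /var/tmp/minikube/kubeadm.yaml.new; documents are separated by "---".
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' "$cfg"
```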
I0923 10:21:16.902081 8275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0923 10:21:16.911070 8275 binaries.go:44] Found k8s binaries, skipping transfer
I0923 10:21:16.911138 8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0923 10:21:16.919548 8275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0923 10:21:16.936910 8275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0923 10:21:16.955024 8275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0923 10:21:16.972399 8275 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0923 10:21:16.976061 8275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 10:21:16.986584 8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 10:21:17.073435 8275 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 10:21:17.088710 8275 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618 for IP: 192.168.49.2
I0923 10:21:17.088780 8275 certs.go:194] generating shared ca certs ...
I0923 10:21:17.088810 8275 certs.go:226] acquiring lock for ca certs: {Name:mk65c867ec8f333e41d1cce69d234e86fc7ac1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:17.089009 8275 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key
I0923 10:21:17.353855 8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt ...
I0923 10:21:17.353889 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt: {Name:mk20e2832fd3e141701b8471b89bb04526400614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:17.354116 8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key ...
I0923 10:21:17.354132 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key: {Name:mkaef70a9f9e29ad452ad4a00856aea93875efe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:17.354226 8275 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key
I0923 10:21:18.611693 8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.crt ...
I0923 10:21:18.611767 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.crt: {Name:mk28403ad2f8201297ec9ab70e4e9be5e67739bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:18.612017 8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key ...
I0923 10:21:18.612055 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key: {Name:mkb3eee3f7c5f6e127c953ada21e2c55ff322612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:18.612176 8275 certs.go:256] generating profile certs ...
I0923 10:21:18.612262 8275 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.key
I0923 10:21:18.612309 8275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt with IP's: []
I0923 10:21:18.915553 8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt ...
I0923 10:21:18.915631 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.crt: {Name:mkda59fae36ef039237e0aef270394146815ca53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:18.915846 8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.key ...
I0923 10:21:18.915880 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/client.key: {Name:mk6490ee68106ed62c0f414cc380568ec2388aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:18.915997 8275 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9
I0923 10:21:18.916038 8275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0923 10:21:19.267671 8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9 ...
I0923 10:21:19.267703 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9: {Name:mk5d1194dcb297b383390fb12f2169d0bb2be05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:19.267907 8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9 ...
I0923 10:21:19.267923 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9: {Name:mkc0abb657ec219b0d73783a5b275bbe8b105742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:19.268009 8275 certs.go:381] copying /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt.995a30f9 -> /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt
I0923 10:21:19.268087 8275 certs.go:385] copying /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key.995a30f9 -> /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key
I0923 10:21:19.268140 8275 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key
I0923 10:21:19.268160 8275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt with IP's: []
I0923 10:21:19.858385 8275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt ...
I0923 10:21:19.858423 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt: {Name:mk78ce6447041726e1168434088a182b3dcc5c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:19.858630 8275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key ...
I0923 10:21:19.858644 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key: {Name:mk93e48fbce21cbcb230e2f97022950562a913c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:19.858869 8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca-key.pem (1675 bytes)
I0923 10:21:19.858908 8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/ca.pem (1078 bytes)
I0923 10:21:19.858939 8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/cert.pem (1123 bytes)
I0923 10:21:19.859004 8275 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-2206/.minikube/certs/key.pem (1675 bytes)
I0923 10:21:19.859614 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0923 10:21:19.885634 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0923 10:21:19.910294 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0923 10:21:19.935132 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0923 10:21:19.959050 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0923 10:21:19.983993 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0923 10:21:20.017649 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0923 10:21:20.047502 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/profiles/addons-193618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0923 10:21:20.072878 8275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-2206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0923 10:21:20.099702 8275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0923 10:21:20.121570 8275 ssh_runner.go:195] Run: openssl version
I0923 10:21:20.128087 8275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0923 10:21:20.139042 8275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:20.143358 8275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:21 /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:20.143468 8275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:20.151191 8275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
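The `b5213941.0` symlink name above is not arbitrary: it is OpenSSL's subject hash of the minikubeCA certificate, which is how the system trust store in `/etc/ssl/certs` locates a CA at verification time. A sketch of the same derivation with a throwaway self-signed cert (requires the `openssl` CLI; all paths here are temporary and illustrative):

```shell
# Derive the <hash>.0 trust-store name the way minikube does above:
# openssl's subject hash of the CA cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.pem" -days 1 -subj "/CN=minikubeCA" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```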
I0923 10:21:20.162139 8275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0923 10:21:20.166795 8275 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0923 10:21:20.166897 8275 kubeadm.go:392] StartCluster: {Name:addons-193618 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-193618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 10:21:20.167055 8275 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0923 10:21:20.186430 8275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0923 10:21:20.195704 8275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0923 10:21:20.204685 8275 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0923 10:21:20.204786 8275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0923 10:21:20.214193 8275 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0923 10:21:20.214216 8275 kubeadm.go:157] found existing configuration files:
I0923 10:21:20.214269 8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0923 10:21:20.223232 8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0923 10:21:20.223325 8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0923 10:21:20.232039 8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0923 10:21:20.241341 8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0923 10:21:20.241407 8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0923 10:21:20.249992 8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0923 10:21:20.259088 8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0923 10:21:20.259175 8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0923 10:21:20.267787 8275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0923 10:21:20.277490 8275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0923 10:21:20.277570 8275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0923 10:21:20.286035 8275 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0923 10:21:20.338218 8275 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0923 10:21:20.338548 8275 kubeadm.go:310] [preflight] Running pre-flight checks
I0923 10:21:20.360314 8275 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0923 10:21:20.360391 8275 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0923 10:21:20.360432 8275 kubeadm.go:310] OS: Linux
I0923 10:21:20.360486 8275 kubeadm.go:310] CGROUPS_CPU: enabled
I0923 10:21:20.360538 8275 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0923 10:21:20.360590 8275 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0923 10:21:20.360642 8275 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0923 10:21:20.360694 8275 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0923 10:21:20.360746 8275 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0923 10:21:20.360798 8275 kubeadm.go:310] CGROUPS_PIDS: enabled
I0923 10:21:20.360849 8275 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0923 10:21:20.360899 8275 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0923 10:21:20.420853 8275 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0923 10:21:20.421069 8275 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0923 10:21:20.421203 8275 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0923 10:21:20.433346 8275 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0923 10:21:20.435868 8275 out.go:235] - Generating certificates and keys ...
I0923 10:21:20.435983 8275 kubeadm.go:310] [certs] Using existing ca certificate authority
I0923 10:21:20.436061 8275 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0923 10:21:20.946154 8275 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0923 10:21:21.450257 8275 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0923 10:21:21.973955 8275 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0923 10:21:22.345911 8275 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0923 10:21:22.657416 8275 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0923 10:21:22.657637 8275 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-193618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 10:21:23.066601 8275 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0923 10:21:23.066948 8275 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-193618 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 10:21:23.340197 8275 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0923 10:21:23.885150 8275 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0923 10:21:24.134994 8275 kubeadm.go:310] [certs] Generating "sa" key and public key
I0923 10:21:24.135315 8275 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0923 10:21:24.277655 8275 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0923 10:21:24.725288 8275 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0923 10:21:25.080394 8275 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0923 10:21:25.338873 8275 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0923 10:21:25.935738 8275 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0923 10:21:25.936462 8275 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0923 10:21:25.939474 8275 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0923 10:21:25.941700 8275 out.go:235] - Booting up control plane ...
I0923 10:21:25.941802 8275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0923 10:21:25.941877 8275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0923 10:21:25.942551 8275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0923 10:21:25.953838 8275 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0923 10:21:25.960423 8275 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0923 10:21:25.960693 8275 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0923 10:21:26.066935 8275 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0923 10:21:26.067054 8275 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0923 10:21:27.067917 8275 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001091799s
I0923 10:21:27.068006 8275 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0923 10:21:33.070418 8275 kubeadm.go:310] [api-check] The API server is healthy after 6.002383818s
I0923 10:21:33.094317 8275 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0923 10:21:33.110784 8275 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0923 10:21:33.135663 8275 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0923 10:21:33.135861 8275 kubeadm.go:310] [mark-control-plane] Marking the node addons-193618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0923 10:21:33.146632 8275 kubeadm.go:310] [bootstrap-token] Using token: y8cva3.7obprnrgdellylf0
I0923 10:21:33.148898 8275 out.go:235] - Configuring RBAC rules ...
I0923 10:21:33.149044 8275 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0923 10:21:33.153445 8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0923 10:21:33.161340 8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0923 10:21:33.165695 8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0923 10:21:33.169722 8275 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0923 10:21:33.175677 8275 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0923 10:21:33.478711 8275 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0923 10:21:33.943731 8275 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0923 10:21:34.478802 8275 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0923 10:21:34.479949 8275 kubeadm.go:310]
I0923 10:21:34.480020 8275 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0923 10:21:34.480026 8275 kubeadm.go:310]
I0923 10:21:34.480102 8275 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0923 10:21:34.480106 8275 kubeadm.go:310]
I0923 10:21:34.480131 8275 kubeadm.go:310] mkdir -p $HOME/.kube
I0923 10:21:34.480190 8275 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0923 10:21:34.480240 8275 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0923 10:21:34.480245 8275 kubeadm.go:310]
I0923 10:21:34.480298 8275 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0923 10:21:34.480303 8275 kubeadm.go:310]
I0923 10:21:34.480351 8275 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0923 10:21:34.480355 8275 kubeadm.go:310]
I0923 10:21:34.480420 8275 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0923 10:21:34.480496 8275 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0923 10:21:34.480563 8275 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0923 10:21:34.480568 8275 kubeadm.go:310]
I0923 10:21:34.480650 8275 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0923 10:21:34.480726 8275 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0923 10:21:34.480731 8275 kubeadm.go:310]
I0923 10:21:34.480813 8275 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y8cva3.7obprnrgdellylf0 \
I0923 10:21:34.480914 8275 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:e43ba79877030aa66abb7e0cea888323e4e60db42c6d4031199b0da3893be839 \
I0923 10:21:34.480935 8275 kubeadm.go:310] --control-plane
I0923 10:21:34.480964 8275 kubeadm.go:310]
I0923 10:21:34.481049 8275 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0923 10:21:34.481054 8275 kubeadm.go:310]
I0923 10:21:34.481134 8275 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y8cva3.7obprnrgdellylf0 \
I0923 10:21:34.481234 8275 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:e43ba79877030aa66abb7e0cea888323e4e60db42c6d4031199b0da3893be839
I0923 10:21:34.484886 8275 kubeadm.go:310] W0923 10:21:20.334263 1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 10:21:34.485208 8275 kubeadm.go:310] W0923 10:21:20.335589 1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 10:21:34.485426 8275 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0923 10:21:34.485534 8275 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0923 10:21:34.485553 8275 cni.go:84] Creating CNI manager for ""
I0923 10:21:34.485569 8275 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 10:21:34.487817 8275 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0923 10:21:34.489695 8275 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0923 10:21:34.498372 8275 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0923 10:21:34.518833 8275 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0923 10:21:34.518992 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:34.519093 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-193618 minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-193618 minikube.k8s.io/primary=true
I0923 10:21:34.665967 8275 ops.go:34] apiserver oom_adj: -16
I0923 10:21:34.666099 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:35.166498 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:35.666217 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:36.166927 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:36.666498 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:37.166665 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:37.666207 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:38.167108 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:38.666154 8275 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:38.772209 8275 kubeadm.go:1113] duration metric: took 4.253271402s to wait for elevateKubeSystemPrivileges
I0923 10:21:38.772237 8275 kubeadm.go:394] duration metric: took 18.605345541s to StartCluster
I0923 10:21:38.772253 8275 settings.go:142] acquiring lock: {Name:mk4964809950bdfd828e78cd468eb635fb21d14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:38.772367 8275 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19689-2206/kubeconfig
I0923 10:21:38.772724 8275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-2206/kubeconfig: {Name:mkff2b2c053c0153995d92eef0e52da52f6d4736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:38.772895 8275 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 10:21:38.773045 8275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0923 10:21:38.773268 8275 config.go:182] Loaded profile config "addons-193618": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:21:38.773295 8275 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0923 10:21:38.773394 8275 addons.go:69] Setting yakd=true in profile "addons-193618"
I0923 10:21:38.773406 8275 addons.go:234] Setting addon yakd=true in "addons-193618"
I0923 10:21:38.773455 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.773917 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.774424 8275 addons.go:69] Setting inspektor-gadget=true in profile "addons-193618"
I0923 10:21:38.774445 8275 addons.go:234] Setting addon inspektor-gadget=true in "addons-193618"
I0923 10:21:38.774467 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.774535 8275 addons.go:69] Setting metrics-server=true in profile "addons-193618"
I0923 10:21:38.774555 8275 addons.go:234] Setting addon metrics-server=true in "addons-193618"
I0923 10:21:38.774579 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.774913 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.775211 8275 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-193618"
I0923 10:21:38.775233 8275 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-193618"
I0923 10:21:38.775276 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.775784 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.776261 8275 addons.go:69] Setting registry=true in profile "addons-193618"
I0923 10:21:38.776283 8275 addons.go:234] Setting addon registry=true in "addons-193618"
I0923 10:21:38.776306 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.776718 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.779457 8275 addons.go:69] Setting cloud-spanner=true in profile "addons-193618"
I0923 10:21:38.779511 8275 addons.go:234] Setting addon cloud-spanner=true in "addons-193618"
I0923 10:21:38.779568 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.780347 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.791916 8275 addons.go:69] Setting storage-provisioner=true in profile "addons-193618"
I0923 10:21:38.791949 8275 addons.go:234] Setting addon storage-provisioner=true in "addons-193618"
I0923 10:21:38.791984 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.792463 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.804452 8275 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-193618"
I0923 10:21:38.807256 8275 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-193618"
I0923 10:21:38.808725 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.809129 8275 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-193618"
I0923 10:21:38.809201 8275 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-193618"
I0923 10:21:38.809248 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.809905 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.816386 8275 addons.go:69] Setting volcano=true in profile "addons-193618"
I0923 10:21:38.816522 8275 addons.go:234] Setting addon volcano=true in "addons-193618"
I0923 10:21:38.816557 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.816620 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.816437 8275 addons.go:69] Setting default-storageclass=true in profile "addons-193618"
I0923 10:21:38.821155 8275 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-193618"
I0923 10:21:38.824331 8275 addons.go:69] Setting volumesnapshots=true in profile "addons-193618"
I0923 10:21:38.824381 8275 addons.go:234] Setting addon volumesnapshots=true in "addons-193618"
I0923 10:21:38.824416 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.825067 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.825462 8275 out.go:177] * Verifying Kubernetes components...
I0923 10:21:38.816444 8275 addons.go:69] Setting gcp-auth=true in profile "addons-193618"
I0923 10:21:38.825667 8275 mustload.go:65] Loading cluster: addons-193618
I0923 10:21:38.825826 8275 config.go:182] Loaded profile config "addons-193618": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:21:38.826045 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.816456 8275 addons.go:69] Setting ingress=true in profile "addons-193618"
I0923 10:21:38.849148 8275 addons.go:234] Setting addon ingress=true in "addons-193618"
I0923 10:21:38.849201 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.849657 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.816460 8275 addons.go:69] Setting ingress-dns=true in profile "addons-193618"
I0923 10:21:38.877240 8275 addons.go:234] Setting addon ingress-dns=true in "addons-193618"
I0923 10:21:38.877289 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:38.911664 8275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 10:21:38.924519 8275 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0923 10:21:38.924604 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.928179 8275 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 10:21:38.928201 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0923 10:21:38.928270 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:38.960794 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.964651 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:38.987565 8275 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0923 10:21:38.990068 8275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:38.990097 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0923 10:21:38.990166 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.009253 8275 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0923 10:21:39.013290 8275 out.go:177] - Using image docker.io/registry:2.8.3
I0923 10:21:39.013383 8275 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0923 10:21:39.013394 8275 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0923 10:21:39.013476 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.017305 8275 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0923 10:21:39.019294 8275 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0923 10:21:39.019337 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0923 10:21:39.019409 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.035386 8275 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0923 10:21:39.037862 8275 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0923 10:21:39.037888 8275 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0923 10:21:39.037958 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.060118 8275 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-193618"
I0923 10:21:39.060161 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:39.060579 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:39.116031 8275 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0923 10:21:39.116316 8275 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0923 10:21:39.120926 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:39.122731 8275 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0923 10:21:39.122936 8275 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 10:21:39.122974 8275 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0923 10:21:39.123048 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.130775 8275 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0923 10:21:39.131136 8275 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0923 10:21:39.131150 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0923 10:21:39.131213 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.140934 8275 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 10:21:39.141361 8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 10:21:39.141393 8275 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0923 10:21:39.141500 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.149160 8275 addons.go:234] Setting addon default-storageclass=true in "addons-193618"
I0923 10:21:39.149201 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:39.149612 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:39.167319 8275 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 10:21:39.170697 8275 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0923 10:21:39.172719 8275 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0923 10:21:39.174389 8275 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0923 10:21:39.177067 8275 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0923 10:21:39.178028 8275 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0923 10:21:39.178048 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0923 10:21:39.178120 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.206937 8275 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0923 10:21:39.228411 8275 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0923 10:21:39.246300 8275 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0923 10:21:39.249175 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.257183 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.268988 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.274588 8275 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0923 10:21:39.278461 8275 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0923 10:21:39.279444 8275 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 10:21:39.279499 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0923 10:21:39.279584 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.282996 8275 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0923 10:21:39.283468 8275 out.go:177] - Using image docker.io/busybox:stable
I0923 10:21:39.289802 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.290437 8275 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0923 10:21:39.292183 8275 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0923 10:21:39.292288 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 10:21:39.292301 8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0923 10:21:39.292363 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.297176 8275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 10:21:39.297195 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0923 10:21:39.297255 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.303791 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.304532 8275 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0923 10:21:39.312771 8275 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0923 10:21:39.312845 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0923 10:21:39.312935 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.345768 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.360739 8275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0923 10:21:39.382430 8275 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:39.382451 8275 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0923 10:21:39.382614 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:39.393487 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.397078 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.410512 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.427210 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.432291 8275 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 10:21:39.445046 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.453989 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.454856 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.474817 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:39.995523 8275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 10:21:39.995597 8275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0923 10:21:40.041523 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:40.303253 8275 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0923 10:21:40.303278 8275 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0923 10:21:40.376892 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0923 10:21:40.435838 8275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 10:21:40.435864 8275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0923 10:21:40.453238 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0923 10:21:40.477108 8275 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 10:21:40.477131 8275 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0923 10:21:40.504517 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 10:21:40.507150 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 10:21:40.507177 8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0923 10:21:40.577860 8275 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0923 10:21:40.577887 8275 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0923 10:21:40.607156 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 10:21:40.722282 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 10:21:40.722309 8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0923 10:21:40.803754 8275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 10:21:40.803780 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0923 10:21:40.807195 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:40.811865 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0923 10:21:40.826413 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 10:21:40.867295 8275 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0923 10:21:40.867321 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0923 10:21:40.984712 8275 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 10:21:40.984739 8275 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0923 10:21:41.002724 8275 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0923 10:21:41.002759 8275 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0923 10:21:41.015897 8275 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0923 10:21:41.015939 8275 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0923 10:21:41.055043 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 10:21:41.055073 8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0923 10:21:41.094448 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0923 10:21:41.139429 8275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 10:21:41.139455 8275 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0923 10:21:41.173266 8275 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0923 10:21:41.173291 8275 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0923 10:21:41.230600 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 10:21:41.230628 8275 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0923 10:21:41.233131 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 10:21:41.233157 8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0923 10:21:41.236484 8275 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 10:21:41.236512 8275 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0923 10:21:41.415084 8275 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0923 10:21:41.415110 8275 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0923 10:21:41.558974 8275 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 10:21:41.559000 8275 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0923 10:21:41.603977 8275 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:41.604004 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0923 10:21:41.608071 8275 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0923 10:21:41.608142 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0923 10:21:41.630693 8275 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 10:21:41.630777 8275 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0923 10:21:41.759527 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0923 10:21:41.850800 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:41.925284 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0923 10:21:41.936664 8275 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 10:21:41.936749 8275 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0923 10:21:41.967950 8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 10:21:41.968030 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0923 10:21:42.027553 8275 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.666777731s)
I0923 10:21:42.027635 8275 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
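The sed pipeline completed just above rewrites the `coredns` ConfigMap in-place to resolve `host.minikube.internal` to the gateway IP. Reconstructed from the two sed expressions in that command, the injected Corefile fragment looks roughly like this (a sketch; the elided directives vary by cluster, and only the `log` and `hosts` blocks are added by minikube):

```
.:53 {
    log            # inserted before the existing "errors" line
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}
```

The `fallthrough` directive matters: without it, any name not listed in the `hosts` block would get NXDOMAIN instead of falling through to `forward`.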
I0923 10:21:42.028810 8275 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.596492853s)
I0923 10:21:42.030068 8275 node_ready.go:35] waiting up to 6m0s for node "addons-193618" to be "Ready" ...
I0923 10:21:42.033202 8275 node_ready.go:49] node "addons-193618" has status "Ready":"True"
I0923 10:21:42.033291 8275 node_ready.go:38] duration metric: took 3.135439ms for node "addons-193618" to be "Ready" ...
I0923 10:21:42.033318 8275 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 10:21:42.046124 8275 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6jz2z" in "kube-system" namespace to be "Ready" ...
I0923 10:21:42.324373 8275 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0923 10:21:42.324467 8275 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0923 10:21:42.442187 8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 10:21:42.442273 8275 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0923 10:21:42.538799 8275 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-193618" context rescaled to 1 replicas
I0923 10:21:42.553010 8275 pod_ready.go:93] pod "coredns-7c65d6cfc9-6jz2z" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:42.553081 8275 pod_ready.go:82] duration metric: took 506.871995ms for pod "coredns-7c65d6cfc9-6jz2z" in "kube-system" namespace to be "Ready" ...
I0923 10:21:42.553107 8275 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lxqrw" in "kube-system" namespace to be "Ready" ...
I0923 10:21:42.653045 8275 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0923 10:21:42.653116 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0923 10:21:42.800675 8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 10:21:42.800748 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0923 10:21:42.986387 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0923 10:21:43.099733 8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 10:21:43.099761 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0923 10:21:43.300639 8275 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 10:21:43.300669 8275 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0923 10:21:43.787337 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 10:21:44.560442 8275 pod_ready.go:93] pod "coredns-7c65d6cfc9-lxqrw" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:44.560522 8275 pod_ready.go:82] duration metric: took 2.007395385s for pod "coredns-7c65d6cfc9-lxqrw" in "kube-system" namespace to be "Ready" ...
I0923 10:21:44.560552 8275 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:44.576263 8275 pod_ready.go:93] pod "etcd-addons-193618" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:44.576285 8275 pod_ready.go:82] duration metric: took 15.71347ms for pod "etcd-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:44.576296 8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:44.923043 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.881435628s)
I0923 10:21:46.086178 8275 pod_ready.go:93] pod "kube-apiserver-addons-193618" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:46.086203 8275 pod_ready.go:82] duration metric: took 1.509899436s for pod "kube-apiserver-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.086216 8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.092834 8275 pod_ready.go:93] pod "kube-controller-manager-addons-193618" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:46.092860 8275 pod_ready.go:82] duration metric: took 6.636713ms for pod "kube-controller-manager-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.092874 8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9k229" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.106238 8275 pod_ready.go:93] pod "kube-proxy-9k229" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:46.106266 8275 pod_ready.go:82] duration metric: took 13.384572ms for pod "kube-proxy-9k229" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.106278 8275 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.174020 8275 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0923 10:21:46.174108 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:46.201210 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:46.435432 8275 pod_ready.go:93] pod "kube-scheduler-addons-193618" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:46.435460 8275 pod_ready.go:82] duration metric: took 329.172367ms for pod "kube-scheduler-addons-193618" in "kube-system" namespace to be "Ready" ...
I0923 10:21:46.435472 8275 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace to be "Ready" ...
I0923 10:21:47.255605 8275 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0923 10:21:47.277781 8275 addons.go:234] Setting addon gcp-auth=true in "addons-193618"
I0923 10:21:47.277889 8275 host.go:66] Checking if "addons-193618" exists ...
I0923 10:21:47.278428 8275 cli_runner.go:164] Run: docker container inspect addons-193618 --format={{.State.Status}}
I0923 10:21:47.311058 8275 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0923 10:21:47.311108 8275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-193618
I0923 10:21:47.338658 8275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-2206/.minikube/machines/addons-193618/id_rsa Username:docker}
I0923 10:21:48.455541 8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:49.823600 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.446670796s)
I0923 10:21:49.823632 8275 addons.go:475] Verifying addon ingress=true in "addons-193618"
I0923 10:21:49.826903 8275 out.go:177] * Verifying ingress addon...
I0923 10:21:49.829658 8275 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0923 10:21:49.836036 8275 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0923 10:21:49.836060 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:50.335213 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:50.468012 8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:50.873187 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:51.336548 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:51.861677 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:51.980069 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.526788305s)
I0923 10:21:51.980144 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.475605967s)
I0923 10:21:51.980353 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.373170776s)
I0923 10:21:51.980385 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.173170005s)
I0923 10:21:51.980473 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.168586301s)
I0923 10:21:51.980515 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.154080316s)
I0923 10:21:51.980551 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.886079662s)
I0923 10:21:51.980564 8275 addons.go:475] Verifying addon registry=true in "addons-193618"
I0923 10:21:51.980766 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.22114604s)
I0923 10:21:51.980788 8275 addons.go:475] Verifying addon metrics-server=true in "addons-193618"
I0923 10:21:51.980870 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.130044891s)
W0923 10:21:51.980893 8275 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 10:21:51.980914 8275 retry.go:31] will retry after 172.911112ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 10:21:51.980975 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.055619208s)
I0923 10:21:51.981291 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.994813747s)
I0923 10:21:51.983992 8275 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-193618 service yakd-dashboard -n yakd-dashboard
I0923 10:21:51.984096 8275 out.go:177] * Verifying registry addon...
I0923 10:21:51.986809 8275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0923 10:21:52.009490 8275 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0923 10:21:52.009525 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W0923 10:21:52.045303 8275 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
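The `storage-provisioner-rancher` warning above is Kubernetes' optimistic-concurrency check firing: the StorageClass was modified by another writer between minikube's read and its update, so the update carried a stale `resourceVersion` and was rejected. The standard remedy is to re-read the object and re-apply the mutation in a loop (what client-go's `retry.RetryOnConflict` does). A self-contained in-memory sketch of that pattern (hypothetical types, not the real client-go API):

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("the object has been modified; please apply " +
	"your changes to the latest version and try again")

// object stands in for a Kubernetes object with a resourceVersion.
type object struct {
	resourceVersion int
	isDefault       bool
}

// update rejects writes carrying a stale resourceVersion, as the API
// server does, and bumps the version on success.
func update(stored *object, upd object) error {
	if upd.resourceVersion != stored.resourceVersion {
		return errConflict
	}
	upd.resourceVersion++
	*stored = upd
	return nil
}

// markDefault retries on conflict: re-read the latest object, re-apply
// the mutation, and attempt the write again.
func markDefault(stored *object) error {
	for i := 0; i < 5; i++ {
		latest := *stored // "GET" the current version
		latest.isDefault = true
		if err := update(stored, latest); !errors.Is(err, errConflict) {
			return err // nil on success, or a non-conflict error
		}
	}
	return errConflict
}

func main() {
	sc := object{resourceVersion: 7}
	stale := object{resourceVersion: 6, isDefault: true}
	fmt.Println(update(&sc, stale) != nil)            // true: stale write rejected
	fmt.Println(markDefault(&sc) == nil, sc.isDefault) // true true
}
```

Because `markDefault` always mutates a fresh copy, the write eventually lands even when concurrent controllers (here, two addons both touching StorageClass annotations) race on the same object.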
I0923 10:21:52.154445 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:52.372282 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:52.490589 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:52.842144 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:52.853811 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.066433471s)
I0923 10:21:52.853892 8275 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-193618"
I0923 10:21:52.854123 8275 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.543043902s)
I0923 10:21:52.856072 8275 out.go:177] * Verifying csi-hostpath-driver addon...
I0923 10:21:52.856179 8275 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 10:21:52.858463 8275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:21:52.860530 8275 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0923 10:21:52.862701 8275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 10:21:52.862763 8275 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0923 10:21:52.870189 8275 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:21:52.870212 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:52.942508 8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:52.982103 8275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 10:21:52.982172 8275 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0923 10:21:52.990943 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:53.053476 8275 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 10:21:53.053551 8275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0923 10:21:53.114230 8275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 10:21:53.335144 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:53.368813 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:53.491404 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:53.835044 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:53.864128 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:53.991519 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:54.338401 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:54.436620 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:54.537074 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:54.545216 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.390722268s)
I0923 10:21:54.545297 8275 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.431043288s)
I0923 10:21:54.548176 8275 addons.go:475] Verifying addon gcp-auth=true in "addons-193618"
I0923 10:21:54.551692 8275 out.go:177] * Verifying gcp-auth addon...
I0923 10:21:54.554246 8275 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0923 10:21:54.557692 8275 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 10:21:54.833832 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:54.863325 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:54.990927 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:55.334876 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:55.363406 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:55.445277 8275 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:55.491381 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:55.834827 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:55.863275 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:55.991093 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:56.334171 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:56.435518 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:56.535175 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:56.834546 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:56.862971 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:56.990447 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:57.334319 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:57.363645 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:57.442300 8275 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:57.442322 8275 pod_ready.go:82] duration metric: took 11.006842129s for pod "nvidia-device-plugin-daemonset-5mdqb" in "kube-system" namespace to be "Ready" ...
I0923 10:21:57.442333 8275 pod_ready.go:39] duration metric: took 15.408987414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 10:21:57.442354 8275 api_server.go:52] waiting for apiserver process to appear ...
I0923 10:21:57.442420 8275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:57.458691 8275 api_server.go:72] duration metric: took 18.685768055s to wait for apiserver process to appear ...
I0923 10:21:57.458722 8275 api_server.go:88] waiting for apiserver healthz status ...
I0923 10:21:57.458742 8275 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0923 10:21:57.466464 8275 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0923 10:21:57.467668 8275 api_server.go:141] control plane version: v1.31.1
I0923 10:21:57.467692 8275 api_server.go:131] duration metric: took 8.96308ms to wait for apiserver health ...
I0923 10:21:57.467701 8275 system_pods.go:43] waiting for kube-system pods to appear ...
I0923 10:21:57.477646 8275 system_pods.go:59] 17 kube-system pods found
I0923 10:21:57.477683 8275 system_pods.go:61] "coredns-7c65d6cfc9-lxqrw" [06a58e53-b760-4639-8a16-e33921af5734] Running
I0923 10:21:57.477695 8275 system_pods.go:61] "csi-hostpath-attacher-0" [40a6270a-ae77-4da3-8c74-f4fa0beb8093] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 10:21:57.477703 8275 system_pods.go:61] "csi-hostpath-resizer-0" [cd924f79-8c12-4c51-a6b2-f212b26f8511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 10:21:57.477712 8275 system_pods.go:61] "csi-hostpathplugin-5fdgw" [2794263a-9a60-4faf-8479-39c29b19318e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 10:21:57.477717 8275 system_pods.go:61] "etcd-addons-193618" [dca22102-d2ff-44af-8467-f5e43f3f285a] Running
I0923 10:21:57.477722 8275 system_pods.go:61] "kube-apiserver-addons-193618" [349caafb-07cf-4118-a7d9-9e176b0b0117] Running
I0923 10:21:57.477733 8275 system_pods.go:61] "kube-controller-manager-addons-193618" [6579d226-8860-47a6-b281-27b083a0eb8c] Running
I0923 10:21:57.477740 8275 system_pods.go:61] "kube-ingress-dns-minikube" [b87d564f-0e35-4054-9131-fba8d4523e89] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0923 10:21:57.477747 8275 system_pods.go:61] "kube-proxy-9k229" [d5bee045-aa08-4eb9-bd38-15d84b988e75] Running
I0923 10:21:57.477752 8275 system_pods.go:61] "kube-scheduler-addons-193618" [2328b96b-0fec-4b16-b3a7-8541b1afa7e9] Running
I0923 10:21:57.477757 8275 system_pods.go:61] "metrics-server-84c5f94fbc-2sqlh" [8f5addb1-a8f0-4eab-a10b-b9726aa3efae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 10:21:57.477762 8275 system_pods.go:61] "nvidia-device-plugin-daemonset-5mdqb" [aefa91be-a5e1-48f3-a1b2-2499c4661d89] Running
I0923 10:21:57.477772 8275 system_pods.go:61] "registry-66c9cd494c-k2qlh" [638f01b9-2726-41db-a1a9-43e4bf4d8443] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 10:21:57.477778 8275 system_pods.go:61] "registry-proxy-bfrml" [cab49b7f-8d32-4017-9de8-d55b0ce0e2f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 10:21:57.477787 8275 system_pods.go:61] "snapshot-controller-56fcc65765-ffcj9" [1ce169da-19ba-425a-8cd5-6d3f822f219a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:57.477797 8275 system_pods.go:61] "snapshot-controller-56fcc65765-zb4t6" [159d70bf-1cd6-47ab-9755-77249cf27379] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:57.477807 8275 system_pods.go:61] "storage-provisioner" [77de7aea-ddda-4eeb-8a47-3e6564a3f597] Running
I0923 10:21:57.477814 8275 system_pods.go:74] duration metric: took 10.106849ms to wait for pod list to return data ...
I0923 10:21:57.477824 8275 default_sa.go:34] waiting for default service account to be created ...
I0923 10:21:57.481384 8275 default_sa.go:45] found service account: "default"
I0923 10:21:57.481409 8275 default_sa.go:55] duration metric: took 3.576285ms for default service account to be created ...
I0923 10:21:57.481419 8275 system_pods.go:116] waiting for k8s-apps to be running ...
I0923 10:21:57.490827 8275 system_pods.go:86] 17 kube-system pods found
I0923 10:21:57.490864 8275 system_pods.go:89] "coredns-7c65d6cfc9-lxqrw" [06a58e53-b760-4639-8a16-e33921af5734] Running
I0923 10:21:57.490876 8275 system_pods.go:89] "csi-hostpath-attacher-0" [40a6270a-ae77-4da3-8c74-f4fa0beb8093] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 10:21:57.490884 8275 system_pods.go:89] "csi-hostpath-resizer-0" [cd924f79-8c12-4c51-a6b2-f212b26f8511] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 10:21:57.490891 8275 system_pods.go:89] "csi-hostpathplugin-5fdgw" [2794263a-9a60-4faf-8479-39c29b19318e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 10:21:57.490899 8275 system_pods.go:89] "etcd-addons-193618" [dca22102-d2ff-44af-8467-f5e43f3f285a] Running
I0923 10:21:57.490906 8275 system_pods.go:89] "kube-apiserver-addons-193618" [349caafb-07cf-4118-a7d9-9e176b0b0117] Running
I0923 10:21:57.490916 8275 system_pods.go:89] "kube-controller-manager-addons-193618" [6579d226-8860-47a6-b281-27b083a0eb8c] Running
I0923 10:21:57.490923 8275 system_pods.go:89] "kube-ingress-dns-minikube" [b87d564f-0e35-4054-9131-fba8d4523e89] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0923 10:21:57.490933 8275 system_pods.go:89] "kube-proxy-9k229" [d5bee045-aa08-4eb9-bd38-15d84b988e75] Running
I0923 10:21:57.490938 8275 system_pods.go:89] "kube-scheduler-addons-193618" [2328b96b-0fec-4b16-b3a7-8541b1afa7e9] Running
I0923 10:21:57.490943 8275 system_pods.go:89] "metrics-server-84c5f94fbc-2sqlh" [8f5addb1-a8f0-4eab-a10b-b9726aa3efae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 10:21:57.490948 8275 system_pods.go:89] "nvidia-device-plugin-daemonset-5mdqb" [aefa91be-a5e1-48f3-a1b2-2499c4661d89] Running
I0923 10:21:57.490957 8275 system_pods.go:89] "registry-66c9cd494c-k2qlh" [638f01b9-2726-41db-a1a9-43e4bf4d8443] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 10:21:57.490963 8275 system_pods.go:89] "registry-proxy-bfrml" [cab49b7f-8d32-4017-9de8-d55b0ce0e2f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 10:21:57.490972 8275 system_pods.go:89] "snapshot-controller-56fcc65765-ffcj9" [1ce169da-19ba-425a-8cd5-6d3f822f219a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:57.490980 8275 system_pods.go:89] "snapshot-controller-56fcc65765-zb4t6" [159d70bf-1cd6-47ab-9755-77249cf27379] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:57.490987 8275 system_pods.go:89] "storage-provisioner" [77de7aea-ddda-4eeb-8a47-3e6564a3f597] Running
I0923 10:21:57.490994 8275 system_pods.go:126] duration metric: took 9.569677ms to wait for k8s-apps to be running ...
I0923 10:21:57.491001 8275 system_svc.go:44] waiting for kubelet service to be running ....
I0923 10:21:57.491059 8275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0923 10:21:57.494399 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:57.513723 8275 system_svc.go:56] duration metric: took 22.712726ms WaitForService to wait for kubelet
I0923 10:21:57.513749 8275 kubeadm.go:582] duration metric: took 18.740831155s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 10:21:57.513768 8275 node_conditions.go:102] verifying NodePressure condition ...
I0923 10:21:57.517393 8275 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0923 10:21:57.517426 8275 node_conditions.go:123] node cpu capacity is 2
I0923 10:21:57.517437 8275 node_conditions.go:105] duration metric: took 3.663097ms to run NodePressure ...
I0923 10:21:57.517450 8275 start.go:241] waiting for startup goroutines ...
I0923 10:21:57.834553 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:57.863925 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:57.992141 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:58.334838 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:58.364533 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:58.491245 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:58.841651 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:58.864020 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:58.990392 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:59.334130 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:59.363846 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:59.490424 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:59.833900 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:21:59.864019 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:59.991073 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:00.336986 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:00.377389 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:00.493259 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:00.834839 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:00.863239 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:00.991160 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:01.334941 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:01.363854 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:01.491877 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:01.833992 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:01.863417 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:01.991026 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:02.334720 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:02.363130 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:02.490820 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:02.834139 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:02.863684 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:02.991305 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:03.335320 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:03.364726 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:03.491384 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:03.834197 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:03.863806 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:03.990992 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:04.334528 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:04.363449 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:04.491507 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:04.834443 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:04.863673 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:04.991434 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:05.335032 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:05.364054 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:05.491174 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:05.838134 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:05.866791 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:05.991074 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:06.334650 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:06.363260 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:06.491087 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:06.834196 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:06.863856 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:06.990657 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:07.334212 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:07.363923 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:07.490791 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:07.834030 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:07.863635 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:07.990208 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:08.340627 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:08.363909 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:08.495162 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:08.844799 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:08.865886 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:08.991259 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:09.334913 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:09.364489 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:09.492179 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:09.834287 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:09.863836 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:09.990691 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:10.335909 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:10.363965 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:10.491321 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:10.834354 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:10.863870 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:10.990665 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:11.338139 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:11.437651 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:11.492080 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:11.834179 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:11.863648 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:11.991165 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:12.334665 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:12.363847 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:12.490727 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:12.834731 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:12.863446 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:12.991112 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:13.335716 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:13.365403 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:13.490848 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:13.834745 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:13.868662 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:13.991175 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:14.335839 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:14.364116 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:14.491378 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:14.834810 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:14.863732 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:14.991242 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:15.334686 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:15.363578 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:15.503755 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:15.834893 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:15.863520 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:15.991778 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:22:16.334003 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:16.363177 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:16.491536 8275 kapi.go:107] duration metric: took 24.504724609s to wait for kubernetes.io/minikube-addons=registry ...
I0923 10:22:16.835712 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:16.863604 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:17.336577 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:17.363615 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:17.834154 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:17.864226 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:18.333728 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:18.368294 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:18.834962 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:18.864067 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:19.334513 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:19.362692 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:19.834047 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:19.863362 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:20.335210 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:20.365751 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:20.834842 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:20.863945 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:21.335280 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:21.364095 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:21.834421 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:21.864506 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:22.336304 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:22.437057 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:22.845295 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:22.946588 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:23.334469 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:23.364818 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:23.834151 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:23.863907 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:24.335640 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:24.365018 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:24.835637 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:24.863922 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:25.334616 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:25.363160 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:25.836341 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:25.864477 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:26.335671 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:26.362842 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:26.837804 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:26.888184 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:27.334593 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:27.363338 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:27.834633 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:27.863527 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:28.336049 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:28.436683 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:28.835040 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:28.863410 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:29.334455 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:29.362853 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:29.834556 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:29.863079 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:30.334539 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:30.362888 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:30.835423 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:30.864913 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:31.350571 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:31.364708 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:31.836234 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:31.863686 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:32.335310 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:32.363521 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:32.901268 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:32.902348 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:33.334932 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:33.363422 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:33.834017 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:33.863942 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:34.334978 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:34.363694 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:34.834469 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:34.863921 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:35.337805 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:35.370555 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:35.835354 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:35.864616 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:36.334497 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:36.362894 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:36.835348 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:36.864205 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:37.334513 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:37.362996 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:37.834324 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:37.863134 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:38.334826 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:38.364004 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:38.837595 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:38.936058 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:39.335041 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:39.366877 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:39.834621 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:39.868143 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:40.338537 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:40.364425 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:40.834969 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:40.863385 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:41.335679 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:41.363302 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:41.834187 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:41.863425 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:42.335319 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:42.364788 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:42.834405 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:42.866358 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:43.334954 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:43.363683 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:43.835249 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:43.863778 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:44.334768 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:44.364200 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:44.834391 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:44.864242 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:45.337055 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:45.438869 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:45.834059 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:45.863477 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:46.335556 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:46.363519 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:46.834391 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:46.864498 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:47.334127 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:47.363428 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:47.834420 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:47.864198 8275 kapi.go:107] duration metric: took 55.005733241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0923 10:22:48.334372 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:48.833792 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:49.334405 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:49.834848 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:50.334437 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:50.833805 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:51.334622 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:51.834623 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:52.333772 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:52.833618 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:53.334182 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:53.834466 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:54.334429 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:54.834453 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:55.334925 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:55.837289 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:56.334475 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:56.835734 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:57.335956 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:57.833813 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:58.335003 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:58.835001 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:59.334693 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:22:59.848414 8275 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 10:23:00.339532 8275 kapi.go:107] duration metric: took 1m10.509870737s to wait for app.kubernetes.io/name=ingress-nginx ...
I0923 10:23:17.583081 8275 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 10:23:17.583110 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:18.058222 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:18.565684 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:19.057930 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:19.565170 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:20.058144 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:20.562600 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:21.058184 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:21.563485 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:22.058105 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:22.562984 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:23.058117 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:23.557545 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:24.058913 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:24.559000 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:25.061290 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:25.564194 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:26.057446 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:26.562938 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:27.058032 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:27.568467 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:28.057549 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:28.562879 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:29.061221 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:29.564543 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:30.062723 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:30.563293 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:31.057805 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:31.557883 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:32.058402 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:32.557892 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:33.059684 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:33.558123 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:34.058020 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:34.563920 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:35.058645 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:35.559302 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:36.058107 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:36.557961 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:37.058138 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:37.559275 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:38.058159 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:38.557869 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:39.058009 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:39.558419 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:40.059827 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:40.564811 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:41.058064 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:41.558013 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:42.058393 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:42.562593 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:43.058051 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:43.563534 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:44.059198 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:44.558109 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:45.059151 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:45.559779 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:46.057573 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:46.563596 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:47.057981 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:47.562521 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:48.058621 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:48.557311 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:49.057989 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:49.557668 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:50.058407 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:50.557639 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:51.059071 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:51.564119 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:52.058134 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:52.558272 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:53.058968 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:53.562937 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:54.058067 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:54.557496 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:55.058333 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:55.563113 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:56.057990 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:56.563024 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:57.058615 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:57.558372 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:58.058496 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:58.563810 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:59.057536 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:59.564297 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:00.062407 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:00.567010 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:01.057408 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:01.564395 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:02.058561 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:02.563388 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:03.059716 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:03.563073 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:04.057949 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:04.557733 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:05.057589 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:05.557743 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:06.057981 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:06.558610 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:07.058553 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:07.559377 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:08.058605 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:08.559822 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:09.058666 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:09.563120 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:10.058100 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:10.557228 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:11.057443 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:11.563693 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:12.058228 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:12.558041 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:13.058390 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:13.563919 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:14.058972 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:14.563907 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:15.058902 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:15.558138 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:16.058275 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:16.558092 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:17.057233 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:17.563521 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:18.059019 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:18.563370 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:19.058236 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:19.562953 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:20.057891 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:20.563590 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:21.058533 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:21.564430 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:22.058102 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:22.557199 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:23.057998 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:23.563140 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:24.057625 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:24.568685 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:25.059507 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:25.572494 8275 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:24:26.068931 8275 kapi.go:107] duration metric: took 2m31.51468247s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0923 10:24:26.070789 8275 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-193618 cluster.
I0923 10:24:26.073140 8275 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0923 10:24:26.075034 8275 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0923 10:24:26.077542 8275 out.go:177] * Enabled addons: storage-provisioner, volcano, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0923 10:24:26.080471 8275 addons.go:510] duration metric: took 2m47.307162101s for enable addons: enabled=[storage-provisioner volcano nvidia-device-plugin cloud-spanner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0923 10:24:26.080549 8275 start.go:246] waiting for cluster config update ...
I0923 10:24:26.080573 8275 start.go:255] writing updated cluster config ...
I0923 10:24:26.080911 8275 ssh_runner.go:195] Run: rm -f paused
I0923 10:24:26.418990 8275 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0923 10:24:26.421655 8275 out.go:177] * Done! kubectl is now configured to use "addons-193618" cluster and "default" namespace by default
==> Docker <==
Sep 23 10:34:05 addons-193618 dockerd[1285]: time="2024-09-23T10:34:05.060074090Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ab650e308a53ab21 traceID=3d1911f9960c1c5dff7b44013d038ccc
Sep 23 10:34:07 addons-193618 cri-dockerd[1543]: time="2024-09-23T10:34:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7dd0fb4424218647ad76450ce819b6e2768a818c793b254dc16822780e06b36e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 23 10:34:08 addons-193618 cri-dockerd[1543]: time="2024-09-23T10:34:08Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
Sep 23 10:34:14 addons-193618 dockerd[1285]: time="2024-09-23T10:34:14.526966721Z" level=info msg="ignoring event" container=69c62e6d82c12b55502fe8e2bea6183dffccde28b0e170b8e9e88665def7c202 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:14 addons-193618 dockerd[1285]: time="2024-09-23T10:34:14.656827927Z" level=info msg="ignoring event" container=7dd0fb4424218647ad76450ce819b6e2768a818c793b254dc16822780e06b36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.265928045Z" level=info msg="ignoring event" container=cebc37d9ea93cb3396ae3ba265d533dbfd0b52b14767f8e01d4bac1ec9e537a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.399923896Z" level=info msg="ignoring event" container=0519e6ec8caa54d21c569d413eca7ac4d57cfdfe62536572741ad0149345f976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.424305726Z" level=info msg="ignoring event" container=c4231b0bcaa4db91fa345c05c451f895ab19d5f9146b0b1f1d29bfd82bcddc15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.434854034Z" level=info msg="ignoring event" container=cbc96faceed465711b8023b5fd080450313a1d5d7aace97764e62a30e2ea541d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.436166569Z" level=info msg="ignoring event" container=343f0c241f9138e33dc83a03292ff62c976daff2712140cd37dc4232f4c80196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.477921678Z" level=info msg="ignoring event" container=1caacf97352455224488da816121776194aaa520ad916072f2e05673cd463516 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.484313343Z" level=info msg="ignoring event" container=a26071c9bbe3ad72c95b07dc49aeca3c0cda93eb646a06f5f16ce4be7307bfad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.497712568Z" level=info msg="ignoring event" container=b04a490940e64ba9dd87f08a69da0d8ed51bda9476fbb79317729291f9096636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.516305188Z" level=info msg="ignoring event" container=ef4900f78f8a36ebd1e8bd8736a6416507e46f924f47b66d9a8fe14b78796df5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.755244405Z" level=info msg="ignoring event" container=05109b361043d32d8555a2a8ba7984423435358b7ae2b1adf4ba63b43661b51c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:16 addons-193618 dockerd[1285]: time="2024-09-23T10:34:16.794759782Z" level=info msg="ignoring event" container=28ec87b625c6cf9e40a79bf1ab55a2d96d87d0bff2daa6669ff97426eb00ffb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:22 addons-193618 dockerd[1285]: time="2024-09-23T10:34:22.960893490Z" level=info msg="ignoring event" container=23efb19617acfb2fb348381ca3647c92ea83c04e3839b4dfd1d484bc876b664b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:22 addons-193618 dockerd[1285]: time="2024-09-23T10:34:22.969881352Z" level=info msg="ignoring event" container=4b8f09461332556f05ddeb809328a18ba0fd447f750ca6375e4b0276185bfb95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:23 addons-193618 dockerd[1285]: time="2024-09-23T10:34:23.151013614Z" level=info msg="ignoring event" container=305ef0589f8af10ff46455e48b58a33bf188a0fd4bda30186ebd1b9fb7ac371a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:23 addons-193618 dockerd[1285]: time="2024-09-23T10:34:23.202896666Z" level=info msg="ignoring event" container=42c778a66a5dc80407c19c0cdf1788f1ac4de163348e7cb24a13439f7d12c4cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:24 addons-193618 dockerd[1285]: time="2024-09-23T10:34:24.735257760Z" level=info msg="ignoring event" container=19415a5860710b57c6d509b4ccc4d94ea14b58e29fa8ff5d2a40962973034565 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.439264957Z" level=info msg="ignoring event" container=940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.523752867Z" level=info msg="ignoring event" container=e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.661073131Z" level=info msg="ignoring event" container=5f240d7d405914a9d44232c66324b2cdeacebc85f4261a5b47c4436352b85186 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:34:25 addons-193618 dockerd[1285]: time="2024-09-23T10:34:25.790268065Z" level=info msg="ignoring event" container=4449e0de1aefc1e570b7d22b118eadff0325adfbdacea38f3299a5b14ebb453e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
35c68cbc9026a ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 45 seconds ago Exited gadget 7 6b44463987b1e gadget-st667
ae8beaa6003c8 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 33e079e6764e7 gcp-auth-89d5ffd79-4lsd5
07a948d5bfd4a registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 343eb29074ed2 ingress-nginx-controller-bc57996ff-xl2d2
723dd49ff0a10 420193b27261a 11 minutes ago Exited patch 1 f7e30b019b64c ingress-nginx-admission-patch-5cd9z
75d000c997318 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 e00f8cc3dae69 ingress-nginx-admission-create-nk96l
c6347d14adc8b rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 4d464912ccc11 local-path-provisioner-86d989889c-625dr
76fa155a737d5 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 400cab5664026 metrics-server-84c5f94fbc-2sqlh
66e82c3226c55 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 12 minutes ago Running yakd 0 2e6f32a61851b yakd-dashboard-67d98fc6b-wlb7t
8b9efef765240 gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 12 minutes ago Running cloud-spanner-emulator 0 f439c7debd80f cloud-spanner-emulator-5b584cc74-tq68x
3d622714d8738 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 87338c8fee98d kube-ingress-dns-minikube
721bb42a705a4 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 9bdbb56b3529b nvidia-device-plugin-daemonset-5mdqb
ae89bac99e2b0 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 bc31e051d25d7 storage-provisioner
eaf065857e059 2f6c962e7b831 12 minutes ago Running coredns 0 0cfe8aaacd67e coredns-7c65d6cfc9-lxqrw
9062e83d9da75 24a140c548c07 12 minutes ago Running kube-proxy 0 7883781f0bcb9 kube-proxy-9k229
3c4822743ab5f 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 52969f4ea4f33 kube-scheduler-addons-193618
0320e83e104fe 27e3830e14027 12 minutes ago Running etcd 0 cae123d2fb50c etcd-addons-193618
49428be737406 d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 6e8403b798aad kube-apiserver-addons-193618
b899dba6bcc74 279f381cb3736 12 minutes ago Running kube-controller-manager 0 b312571bd783d kube-controller-manager-addons-193618
==> controller_ingress [07a948d5bfd4] <==
NGINX Ingress controller
Release: v1.11.2
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
I0923 10:22:59.907541 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
I0923 10:23:00.882876 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0923 10:23:00.901063 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0923 10:23:00.911135 7 nginx.go:271] "Starting NGINX Ingress controller"
I0923 10:23:00.932703 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"209a71a3-9bd8-4a8c-b609-07e1111d6bf2", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0923 10:23:00.933526 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"265e10d2-baa3-4c83-8ab2-563c72829e4a", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0923 10:23:00.933555 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"dd8afef1-62f0-4462-a1df-f18a898e5259", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0923 10:23:02.113272 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0923 10:23:02.113275 7 nginx.go:317] "Starting NGINX process"
I0923 10:23:02.113904 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0923 10:23:02.114185 7 controller.go:193] "Configuration changes detected, backend reload required"
I0923 10:23:02.133703 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0923 10:23:02.133921 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xl2d2"
I0923 10:23:02.154541 7 controller.go:213] "Backend successfully reloaded"
I0923 10:23:02.154792 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0923 10:23:02.154914 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xl2d2", UID:"eb6dd0a7-0a66-4583-84e0-3164dc71b70a", APIVersion:"v1", ResourceVersion:"1276", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0923 10:23:02.161282 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xl2d2" node="addons-193618"
==> coredns [eaf065857e05] <==
[INFO] 10.244.0.5:47700 - 13911 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038581s
[INFO] 10.244.0.5:56106 - 22157 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007021973s
[INFO] 10.244.0.5:56106 - 45450 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007899381s
[INFO] 10.244.0.5:47955 - 45600 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049928s
[INFO] 10.244.0.5:47955 - 17443 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00004311s
[INFO] 10.244.0.5:47089 - 35257 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000637315s
[INFO] 10.244.0.5:47089 - 58549 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000045695s
[INFO] 10.244.0.5:39582 - 34582 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046409s
[INFO] 10.244.0.5:39582 - 11547 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044013s
[INFO] 10.244.0.5:52675 - 4008 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046606s
[INFO] 10.244.0.5:52675 - 61366 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049215s
[INFO] 10.244.0.5:45553 - 1439 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001501718s
[INFO] 10.244.0.5:45553 - 20354 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001559885s
[INFO] 10.244.0.5:51971 - 59058 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048091s
[INFO] 10.244.0.5:51971 - 19888 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040772s
[INFO] 10.244.0.25:37390 - 1808 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000256494s
[INFO] 10.244.0.25:40077 - 1552 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173186s
[INFO] 10.244.0.25:36127 - 16327 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088254s
[INFO] 10.244.0.25:33763 - 50843 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069276s
[INFO] 10.244.0.25:36605 - 45847 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000775133s
[INFO] 10.244.0.25:54294 - 56081 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128698s
[INFO] 10.244.0.25:50045 - 23584 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002552773s
[INFO] 10.244.0.25:60343 - 48959 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004199974s
[INFO] 10.244.0.25:55987 - 48606 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002017886s
[INFO] 10.244.0.25:43811 - 45347 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001881944s
==> describe nodes <==
Name: addons-193618
Roles: control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=addons-193618
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
                    minikube.k8s.io/name=addons-193618
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700
                    minikube.k8s.io/version=v1.34.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
                    topology.hostpath.csi/node=addons-193618
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Sep 2024 10:21:31 +0000
Taints: <none>
Unschedulable: false
Lease:
  HolderIdentity:  addons-193618
  AcquireTime:     <unset>
  RenewTime:       Mon, 23 Sep 2024 10:34:19 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:28 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:28 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:28 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Mon, 23 Sep 2024 10:33:39 +0000   Mon, 23 Sep 2024 10:21:31 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    addons-193618
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022300Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022300Ki
  pods:               110
System Info:
  Machine ID:                 7d998278c96147d49f1ab1e139e6ff1f
  System UUID:                fa2e08c8-d57f-4dbe-a2dc-d866b9da2af3
  Boot ID:                    a368a3b9-64b6-4915-adf4-926cc803503e
  Kernel Version:             5.15.0-1070-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.3.0
  Kubelet Version:            v1.31.1
  Kube-Proxy Version:         v1.31.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (17 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
  default                     cloud-spanner-emulator-5b584cc74-tq68x      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  gadget                      gadget-st667                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  gcp-auth                    gcp-auth-89d5ffd79-4lsd5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  ingress-nginx               ingress-nginx-controller-bc57996ff-xl2d2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
  kube-system                 coredns-7c65d6cfc9-lxqrw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
  kube-system                 etcd-addons-193618                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
  kube-system                 kube-apiserver-addons-193618                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-controller-manager-addons-193618       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-proxy-9k229                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-scheduler-addons-193618                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 metrics-server-84c5f94fbc-2sqlh             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
  kube-system                 nvidia-device-plugin-daemonset-5mdqb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  local-path-storage          local-path-provisioner-86d989889c-625dr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  yakd-dashboard              yakd-dashboard-67d98fc6b-wlb7t              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  0 (0%)
  memory             588Mi (7%)  426Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 12m                kube-proxy
  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-193618 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node addons-193618 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-193618 status is now: NodeHasSufficientPID
  Normal   Starting                 13m                kubelet          Starting kubelet.
  Normal   Starting                 12m                kubelet          Starting kubelet.
  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-193618 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-193618 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-193618 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           12m                node-controller  Node addons-193618 event: Registered Node addons-193618 in Controller
==> dmesg <==
[Sep23 10:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015777] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.503278] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.769655] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.076197] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [0320e83e104f] <==
{"level":"info","ts":"2024-09-23T10:21:28.165106Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-09-23T10:21:28.165329Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-09-23T10:21:28.521000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-23T10:21:28.521248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-23T10:21:28.521419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-23T10:21:28.521534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-23T10:21:28.521722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T10:21:28.521856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-23T10:21:28.521960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T10:21:28.523714Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-193618 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-23T10:21:28.523901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T10:21:28.524320Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T10:21:28.525600Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T10:21:28.529980Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-23T10:21:28.535850Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T10:21:28.573325Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T10:21:28.573489Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T10:21:28.535877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T10:21:28.535980Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-23T10:21:28.585278Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-23T10:21:28.586052Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T10:21:28.594104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-23T10:31:29.185095Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1886}
{"level":"info","ts":"2024-09-23T10:31:29.233313Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1886,"took":"47.406345ms","hash":4170233479,"current-db-size-bytes":8404992,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4845568,"current-db-size-in-use":"4.8 MB"}
{"level":"info","ts":"2024-09-23T10:31:29.233374Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4170233479,"revision":1886,"compact-revision":-1}
==> gcp-auth [ae8beaa6003c] <==
2024/09/23 10:24:25 GCP Auth Webhook started!
2024/09/23 10:24:43 Ready to marshal response ...
2024/09/23 10:24:43 Ready to write response ...
2024/09/23 10:24:44 Ready to marshal response ...
2024/09/23 10:24:44 Ready to write response ...
2024/09/23 10:25:08 Ready to marshal response ...
2024/09/23 10:25:08 Ready to write response ...
2024/09/23 10:25:09 Ready to marshal response ...
2024/09/23 10:25:09 Ready to write response ...
2024/09/23 10:25:09 Ready to marshal response ...
2024/09/23 10:25:09 Ready to write response ...
2024/09/23 10:33:13 Ready to marshal response ...
2024/09/23 10:33:13 Ready to write response ...
2024/09/23 10:33:13 Ready to marshal response ...
2024/09/23 10:33:13 Ready to write response ...
2024/09/23 10:33:13 Ready to marshal response ...
2024/09/23 10:33:13 Ready to write response ...
2024/09/23 10:33:24 Ready to marshal response ...
2024/09/23 10:33:24 Ready to write response ...
2024/09/23 10:33:48 Ready to marshal response ...
2024/09/23 10:33:48 Ready to write response ...
2024/09/23 10:34:07 Ready to marshal response ...
2024/09/23 10:34:07 Ready to write response ...
==> kernel <==
10:34:26 up 16 min, 0 users, load average: 0.63, 0.66, 0.56
Linux addons-193618 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [49428be73740] <==
I0923 10:24:59.308636 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0923 10:24:59.328905 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 10:24:59.762198 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 10:24:59.798898 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 10:24:59.991659 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0923 10:25:00.157080 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0923 10:25:00.342637 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0923 10:25:00.446311 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0923 10:25:00.494214 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0923 10:25:00.621112 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0923 10:25:01.035216 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0923 10:25:01.493408 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0923 10:33:13.159718 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.142.64"}
I0923 10:33:55.441717 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0923 10:34:22.627220 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 10:34:22.627261 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 10:34:22.701174 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 10:34:22.701451 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 10:34:22.723308 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 10:34:22.725420 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 10:34:22.822030 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 10:34:22.822084 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0923 10:34:23.702043 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0923 10:34:23.822693 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0923 10:34:23.827246 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [b899dba6bcc7] <==
E0923 10:34:02.960399 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:34:05.219955 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:05.220066 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:34:06.270642 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:06.270692 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:34:13.920970 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:13.921016 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 10:34:16.167482 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
I0923 10:34:16.262485 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
I0923 10:34:16.543639 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-193618"
W0923 10:34:20.839685 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:20.839729 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 10:34:22.863195 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="12.734µs"
E0923 10:34:23.703914 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0923 10:34:23.824563 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0923 10:34:23.829133 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:34:24.584753 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:24.584803 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:34:24.629635 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:24.629683 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:34:25.066531 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:25.066582 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 10:34:25.349340 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.041µs"
W0923 10:34:26.694219 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:34:26.694265 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [9062e83d9da7] <==
I0923 10:21:40.158856 1 server_linux.go:66] "Using iptables proxy"
I0923 10:21:40.370985 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0923 10:21:40.371050 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0923 10:21:40.394670 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0923 10:21:40.394752 1 server_linux.go:169] "Using iptables Proxier"
I0923 10:21:40.410170 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0923 10:21:40.410501 1 server.go:483] "Version info" version="v1.31.1"
I0923 10:21:40.410516 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0923 10:21:40.412009 1 config.go:199] "Starting service config controller"
I0923 10:21:40.412036 1 shared_informer.go:313] Waiting for caches to sync for service config
I0923 10:21:40.412100 1 config.go:105] "Starting endpoint slice config controller"
I0923 10:21:40.412106 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0923 10:21:40.413453 1 config.go:328] "Starting node config controller"
I0923 10:21:40.413475 1 shared_informer.go:313] Waiting for caches to sync for node config
I0923 10:21:40.512993 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0923 10:21:40.513067 1 shared_informer.go:320] Caches are synced for service config
I0923 10:21:40.513673 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [3c4822743ab5] <==
E0923 10:21:31.525349 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.522341 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:31.525559 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0923 10:21:31.525708 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.335928 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0923 10:21:32.335968 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.420388 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0923 10:21:32.420494 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.472549 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:32.473322 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.473112 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0923 10:21:32.473674 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.538733 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0923 10:21:32.539020 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.591925 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 10:21:32.592190 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.636913 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0923 10:21:32.636985 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.676548 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0923 10:21:32.676594 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.682970 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:32.683025 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.683097 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:32.683115 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0923 10:21:33.105064 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.438872 2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159d70bf-1cd6-47ab-9755-77249cf27379-kube-api-access-w7cbm" (OuterVolumeSpecName: "kube-api-access-w7cbm") pod "159d70bf-1cd6-47ab-9755-77249cf27379" (UID: "159d70bf-1cd6-47ab-9755-77249cf27379"). InnerVolumeSpecName "kube-api-access-w7cbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.536463 2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w7cbm\" (UniqueName: \"kubernetes.io/projected/159d70bf-1cd6-47ab-9755-77249cf27379-kube-api-access-w7cbm\") on node \"addons-193618\" DevicePath \"\""
Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.834625 2328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159d70bf-1cd6-47ab-9755-77249cf27379" path="/var/lib/kubelet/pods/159d70bf-1cd6-47ab-9755-77249cf27379/volumes"
Sep 23 10:34:23 addons-193618 kubelet[2328]: I0923 10:34:23.835016 2328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce169da-19ba-425a-8cd5-6d3f822f219a" path="/var/lib/kubelet/pods/1ce169da-19ba-425a-8cd5-6d3f822f219a/volumes"
Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.946158 2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6xh2\" (UniqueName: \"kubernetes.io/projected/e65092e7-6746-42c0-a92c-d40091668e67-kube-api-access-z6xh2\") pod \"e65092e7-6746-42c0-a92c-d40091668e67\" (UID: \"e65092e7-6746-42c0-a92c-d40091668e67\") "
Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.946248 2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e65092e7-6746-42c0-a92c-d40091668e67-gcp-creds\") pod \"e65092e7-6746-42c0-a92c-d40091668e67\" (UID: \"e65092e7-6746-42c0-a92c-d40091668e67\") "
Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.946445 2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65092e7-6746-42c0-a92c-d40091668e67-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e65092e7-6746-42c0-a92c-d40091668e67" (UID: "e65092e7-6746-42c0-a92c-d40091668e67"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 23 10:34:24 addons-193618 kubelet[2328]: I0923 10:34:24.949506 2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65092e7-6746-42c0-a92c-d40091668e67-kube-api-access-z6xh2" (OuterVolumeSpecName: "kube-api-access-z6xh2") pod "e65092e7-6746-42c0-a92c-d40091668e67" (UID: "e65092e7-6746-42c0-a92c-d40091668e67"). InnerVolumeSpecName "kube-api-access-z6xh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.047855 2328 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e65092e7-6746-42c0-a92c-d40091668e67-gcp-creds\") on node \"addons-193618\" DevicePath \"\""
Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.047899 2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z6xh2\" (UniqueName: \"kubernetes.io/projected/e65092e7-6746-42c0-a92c-d40091668e67-kube-api-access-z6xh2\") on node \"addons-193618\" DevicePath \"\""
Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.859331 2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzstt\" (UniqueName: \"kubernetes.io/projected/638f01b9-2726-41db-a1a9-43e4bf4d8443-kube-api-access-zzstt\") pod \"638f01b9-2726-41db-a1a9-43e4bf4d8443\" (UID: \"638f01b9-2726-41db-a1a9-43e4bf4d8443\") "
Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.862312 2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638f01b9-2726-41db-a1a9-43e4bf4d8443-kube-api-access-zzstt" (OuterVolumeSpecName: "kube-api-access-zzstt") pod "638f01b9-2726-41db-a1a9-43e4bf4d8443" (UID: "638f01b9-2726-41db-a1a9-43e4bf4d8443"). InnerVolumeSpecName "kube-api-access-zzstt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.873570 2328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65092e7-6746-42c0-a92c-d40091668e67" path="/var/lib/kubelet/pods/e65092e7-6746-42c0-a92c-d40091668e67/volumes"
Sep 23 10:34:25 addons-193618 kubelet[2328]: I0923 10:34:25.959725 2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zzstt\" (UniqueName: \"kubernetes.io/projected/638f01b9-2726-41db-a1a9-43e4bf4d8443-kube-api-access-zzstt\") on node \"addons-193618\" DevicePath \"\""
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.060172 2328 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpzfq\" (UniqueName: \"kubernetes.io/projected/cab49b7f-8d32-4017-9de8-d55b0ce0e2f3-kube-api-access-qpzfq\") pod \"cab49b7f-8d32-4017-9de8-d55b0ce0e2f3\" (UID: \"cab49b7f-8d32-4017-9de8-d55b0ce0e2f3\") "
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.062765 2328 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab49b7f-8d32-4017-9de8-d55b0ce0e2f3-kube-api-access-qpzfq" (OuterVolumeSpecName: "kube-api-access-qpzfq") pod "cab49b7f-8d32-4017-9de8-d55b0ce0e2f3" (UID: "cab49b7f-8d32-4017-9de8-d55b0ce0e2f3"). InnerVolumeSpecName "kube-api-access-qpzfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.161197 2328 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qpzfq\" (UniqueName: \"kubernetes.io/projected/cab49b7f-8d32-4017-9de8-d55b0ce0e2f3-kube-api-access-qpzfq\") on node \"addons-193618\" DevicePath \"\""
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.411668 2328 scope.go:117] "RemoveContainer" containerID="940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.476587 2328 scope.go:117] "RemoveContainer" containerID="940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
Sep 23 10:34:26 addons-193618 kubelet[2328]: E0923 10:34:26.477720 2328 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b" containerID="940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.477772 2328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"} err="failed to get container status \"940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 940a4f85a0f45b8c04a8e12d184ba73d0109916b0f86fb0e8cfb7aa7c061e20b"
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.477798 2328 scope.go:117] "RemoveContainer" containerID="e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.516649 2328 scope.go:117] "RemoveContainer" containerID="e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
Sep 23 10:34:26 addons-193618 kubelet[2328]: E0923 10:34:26.517945 2328 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340" containerID="e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
Sep 23 10:34:26 addons-193618 kubelet[2328]: I0923 10:34:26.517992 2328 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"} err="failed to get container status \"e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340\": rpc error: code = Unknown desc = Error response from daemon: No such container: e97fe16d8c7de5e7432f511015030aa28e69d87bbfe16fec74c46dd0f9bb4340"
==> storage-provisioner [ae89bac99e2b] <==
I0923 10:21:46.364192 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0923 10:21:46.380027 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0923 10:21:46.380123 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0923 10:21:46.395620 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0923 10:21:46.400566 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-193618_86efd8e5-7d60-49ca-97ff-53ab0c106507!
I0923 10:21:46.414459 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3680dca0-c009-45d8-b484-66aad8e6eddc", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-193618_86efd8e5-7d60-49ca-97ff-53ab0c106507 became leader
I0923 10:21:46.501636 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-193618_86efd8e5-7d60-49ca-97ff-53ab0c106507!
E0923 10:34:15.210156 1 controller.go:1050] claim "bf1b17f7-808d-4288-b9cf-1d8eb86ef59c" in work queue no longer exists
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-193618 -n addons-193618
helpers_test.go:261: (dbg) Run: kubectl --context addons-193618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-193618 describe pod busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-193618 describe pod busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z: exit status 1 (114.9274ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-193618/192.168.49.2
Start Time: Mon, 23 Sep 2024 10:25:09 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml7d2 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ml7d2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m18s default-scheduler Successfully assigned default/busybox to addons-193618
Normal Pulling 7m45s (x4 over 9m18s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m44s (x4 over 9m18s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m44s (x4 over 9m18s) kubelet Error: ErrImagePull
Warning Failed 7m34s (x6 over 9m17s) kubelet Error: ImagePullBackOff
Normal BackOff 4m4s (x21 over 9m17s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-nk96l" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-5cd9z" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-193618 describe pod busybox ingress-nginx-admission-create-nk96l ingress-nginx-admission-patch-5cd9z: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.51s)