=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.241651ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-2d9gq" [c7dd58ff-e9b5-4511-9a22-023705b9fdfe] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00375763s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8svd4" [c0749a82-4329-4dc6-92f9-0bd490e250bc] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004003839s
addons_test.go:342: (dbg) Run: kubectl --context addons-161312 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.126021339s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run: out/minikube-linux-arm64 -p addons-161312 ip
2024/08/28 17:05:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-arm64 -p addons-161312 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-161312
helpers_test.go:235: (dbg) docker inspect addons-161312:
-- stdout --
[
{
"Id": "d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e",
"Created": "2024-08-28T16:52:03.925678699Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8846,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-08-28T16:52:04.120703728Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:2cc8dc59c2b679153d99f84cc70dab3e87225f8a0d04f61969b54714a9c4cd4d",
"ResolvConfPath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/hostname",
"HostsPath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/hosts",
"LogPath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e-json.log",
"Name": "/addons-161312",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"addons-161312:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-161312",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156-init/diff:/var/lib/docker/overlay2/c18b9d3934b1670f096f7301a8e8724fdff2e22642728bcfca597c0633025683/diff",
"MergedDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156/merged",
"UpperDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156/diff",
"WorkDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "addons-161312",
"Source": "/var/lib/docker/volumes/addons-161312/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "addons-161312",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-161312",
"name.minikube.sigs.k8s.io": "addons-161312",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c0db1cffe61dacfb82935168bf6114819f7a3006d7a9f4dd00069c1383acf367",
"SandboxKey": "/var/run/docker/netns/c0db1cffe61d",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-161312": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "31812af31d83671e38b71e7c104f91ddb3ac10c99e30160d02416dbcffc4b1aa",
"EndpointID": "554a67bcbe376898a0cfb0bae8452ebb9fea0a0b25de82bad87e5f732f3cd09d",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-161312",
"d0d17dedb03f"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-161312 -n addons-161312
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-161312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 logs -n 25: (1.515036083s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-224586 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | -p download-only-224586 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p download-only-224586 | download-only-224586 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| start | -o=json --download-only | download-only-427986 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | -p download-only-427986 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p download-only-427986 | download-only-427986 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p download-only-224586 | download-only-224586 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p download-only-427986 | download-only-427986 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| start | --download-only -p | download-docker-651207 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | download-docker-651207 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-651207 | download-docker-651207 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| start | --download-only -p | binary-mirror-834196 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | binary-mirror-834196 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:40931 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-834196 | binary-mirror-834196 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| addons | enable dashboard -p | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | addons-161312 | | | | | |
| addons | disable dashboard -p | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | addons-161312 | | | | | |
| start | -p addons-161312 --wait=true | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:55 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-161312 addons disable | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 16:55 UTC | 28 Aug 24 16:56 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-161312 addons disable | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:04 UTC | 28 Aug 24 17:04 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-161312 addons | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:04 UTC | 28 Aug 24 17:05 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-161312 addons | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
| | -p addons-161312 | | | | | |
| ip | addons-161312 ip | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
| addons | addons-161312 addons disable | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ssh | addons-161312 ssh cat | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
| | /opt/local-path-provisioner/pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7_default_test-pvc/file1 | | | | | |
| addons | addons-161312 addons disable | addons-161312 | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/28 16:51:37
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0828 16:51:37.905156 8351 out.go:345] Setting OutFile to fd 1 ...
I0828 16:51:37.905310 8351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 16:51:37.905338 8351 out.go:358] Setting ErrFile to fd 2...
I0828 16:51:37.905344 8351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 16:51:37.905607 8351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
I0828 16:51:37.906075 8351 out.go:352] Setting JSON to false
I0828 16:51:37.906872 8351 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2045,"bootTime":1724861853,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0828 16:51:37.906944 8351 start.go:139] virtualization:
I0828 16:51:37.910242 8351 out.go:177] * [addons-161312] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0828 16:51:37.913888 8351 out.go:177] - MINIKUBE_LOCATION=19529
I0828 16:51:37.913934 8351 notify.go:220] Checking for updates...
I0828 16:51:37.919249 8351 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0828 16:51:37.921854 8351 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
I0828 16:51:37.924407 8351 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
I0828 16:51:37.927053 8351 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0828 16:51:37.929835 8351 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0828 16:51:37.932679 8351 driver.go:392] Setting default libvirt URI to qemu:///system
I0828 16:51:37.955873 8351 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
I0828 16:51:37.955994 8351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0828 16:51:38.013913 8351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 16:51:38.005032488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0828 16:51:38.014024 8351 docker.go:307] overlay module found
I0828 16:51:38.016988 8351 out.go:177] * Using the docker driver based on user configuration
I0828 16:51:38.019445 8351 start.go:297] selected driver: docker
I0828 16:51:38.019466 8351 start.go:901] validating driver "docker" against <nil>
I0828 16:51:38.019481 8351 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0828 16:51:38.020107 8351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0828 16:51:38.094224 8351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 16:51:38.084433705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0828 16:51:38.094423 8351 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0828 16:51:38.094660 8351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0828 16:51:38.097336 8351 out.go:177] * Using Docker driver with root privileges
I0828 16:51:38.100097 8351 cni.go:84] Creating CNI manager for ""
I0828 16:51:38.100139 8351 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0828 16:51:38.100152 8351 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0828 16:51:38.100277 8351 start.go:340] cluster config:
{Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0828 16:51:38.103262 8351 out.go:177] * Starting "addons-161312" primary control-plane node in "addons-161312" cluster
I0828 16:51:38.106067 8351 cache.go:121] Beginning downloading kic base image for docker with docker
I0828 16:51:38.109137 8351 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
I0828 16:51:38.111748 8351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0828 16:51:38.111831 8351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0828 16:51:38.111845 8351 cache.go:56] Caching tarball of preloaded images
I0828 16:51:38.111846 8351 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
I0828 16:51:38.111940 8351 preload.go:172] Found /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0828 16:51:38.111951 8351 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0828 16:51:38.112301 8351 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/config.json ...
I0828 16:51:38.112418 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/config.json: {Name:mk19f9d2d3e637445941a22572c01984315af055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:38.128160 8351 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
I0828 16:51:38.128291 8351 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
I0828 16:51:38.128320 8351 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
I0828 16:51:38.128326 8351 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
I0828 16:51:38.128333 8351 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
I0828 16:51:38.128341 8351 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
I0828 16:51:55.502629 8351 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
I0828 16:51:55.502667 8351 cache.go:194] Successfully downloaded all kic artifacts
I0828 16:51:55.502708 8351 start.go:360] acquireMachinesLock for addons-161312: {Name:mk377363816433b11c915784309a449f180b325a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0828 16:51:55.502832 8351 start.go:364] duration metric: took 101.936µs to acquireMachinesLock for "addons-161312"
I0828 16:51:55.502863 8351 start.go:93] Provisioning new machine with config: &{Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0828 16:51:55.502947 8351 start.go:125] createHost starting for "" (driver="docker")
I0828 16:51:55.505360 8351 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0828 16:51:55.505647 8351 start.go:159] libmachine.API.Create for "addons-161312" (driver="docker")
I0828 16:51:55.505696 8351 client.go:168] LocalClient.Create starting
I0828 16:51:55.505862 8351 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem
I0828 16:51:56.608957 8351 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem
I0828 16:51:57.317898 8351 cli_runner.go:164] Run: docker network inspect addons-161312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0828 16:51:57.333999 8351 cli_runner.go:211] docker network inspect addons-161312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0828 16:51:57.334094 8351 network_create.go:284] running [docker network inspect addons-161312] to gather additional debugging logs...
I0828 16:51:57.334117 8351 cli_runner.go:164] Run: docker network inspect addons-161312
W0828 16:51:57.349754 8351 cli_runner.go:211] docker network inspect addons-161312 returned with exit code 1
I0828 16:51:57.349786 8351 network_create.go:287] error running [docker network inspect addons-161312]: docker network inspect addons-161312: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-161312 not found
I0828 16:51:57.349802 8351 network_create.go:289] output of [docker network inspect addons-161312]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-161312 not found
** /stderr **
I0828 16:51:57.349897 8351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0828 16:51:57.364728 8351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400176f090}
I0828 16:51:57.364775 8351 network_create.go:124] attempt to create docker network addons-161312 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0828 16:51:57.364833 8351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-161312 addons-161312
I0828 16:51:57.442845 8351 network_create.go:108] docker network addons-161312 192.168.49.0/24 created
I0828 16:51:57.442875 8351 kic.go:121] calculated static IP "192.168.49.2" for the "addons-161312" container
I0828 16:51:57.442946 8351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0828 16:51:57.461226 8351 cli_runner.go:164] Run: docker volume create addons-161312 --label name.minikube.sigs.k8s.io=addons-161312 --label created_by.minikube.sigs.k8s.io=true
I0828 16:51:57.480294 8351 oci.go:103] Successfully created a docker volume addons-161312
I0828 16:51:57.480384 8351 cli_runner.go:164] Run: docker run --rm --name addons-161312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-161312 --entrypoint /usr/bin/test -v addons-161312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
I0828 16:51:59.589271 8351 cli_runner.go:217] Completed: docker run --rm --name addons-161312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-161312 --entrypoint /usr/bin/test -v addons-161312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (2.108845177s)
I0828 16:51:59.589301 8351 oci.go:107] Successfully prepared a docker volume addons-161312
I0828 16:51:59.589326 8351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0828 16:51:59.589345 8351 kic.go:194] Starting extracting preloaded images to volume ...
I0828 16:51:59.589424 8351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-161312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
I0828 16:52:03.862134 8351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-161312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.272655839s)
I0828 16:52:03.862164 8351 kic.go:203] duration metric: took 4.272816243s to extract preloaded images to volume ...
W0828 16:52:03.862307 8351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0828 16:52:03.862438 8351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0828 16:52:03.910797 8351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-161312 --name addons-161312 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-161312 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-161312 --network addons-161312 --ip 192.168.49.2 --volume addons-161312:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
I0828 16:52:04.289431 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Running}}
I0828 16:52:04.314235 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:04.338769 8351 cli_runner.go:164] Run: docker exec addons-161312 stat /var/lib/dpkg/alternatives/iptables
I0828 16:52:04.421474 8351 oci.go:144] the created container "addons-161312" has a running status.
I0828 16:52:04.421504 8351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa...
I0828 16:52:04.758403 8351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0828 16:52:04.783143 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:04.826922 8351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0828 16:52:04.826943 8351 kic_runner.go:114] Args: [docker exec --privileged addons-161312 chown docker:docker /home/docker/.ssh/authorized_keys]
I0828 16:52:04.924579 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:04.954310 8351 machine.go:93] provisionDockerMachine start ...
I0828 16:52:04.954407 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:05.004743 8351 main.go:141] libmachine: Using SSH client type: native
I0828 16:52:05.005006 8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0828 16:52:05.005015 8351 main.go:141] libmachine: About to run SSH command:
hostname
I0828 16:52:05.170512 8351 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-161312
I0828 16:52:05.170554 8351 ubuntu.go:169] provisioning hostname "addons-161312"
I0828 16:52:05.170661 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:05.189110 8351 main.go:141] libmachine: Using SSH client type: native
I0828 16:52:05.189411 8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0828 16:52:05.189426 8351 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-161312 && echo "addons-161312" | sudo tee /etc/hostname
I0828 16:52:05.343132 8351 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-161312
I0828 16:52:05.343253 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:05.370686 8351 main.go:141] libmachine: Using SSH client type: native
I0828 16:52:05.370921 8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0828 16:52:05.370937 8351 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-161312' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-161312/g' /etc/hosts;
else
echo '127.0.1.1 addons-161312' | sudo tee -a /etc/hosts;
fi
fi
I0828 16:52:05.507534 8351 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0828 16:52:05.507560 8351 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19529-2268/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-2268/.minikube}
I0828 16:52:05.507591 8351 ubuntu.go:177] setting up certificates
I0828 16:52:05.507600 8351 provision.go:84] configureAuth start
I0828 16:52:05.507667 8351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-161312
I0828 16:52:05.524897 8351 provision.go:143] copyHostCerts
I0828 16:52:05.524988 8351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-2268/.minikube/ca.pem (1078 bytes)
I0828 16:52:05.525118 8351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-2268/.minikube/cert.pem (1123 bytes)
I0828 16:52:05.525194 8351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-2268/.minikube/key.pem (1675 bytes)
I0828 16:52:05.525249 8351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-2268/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca-key.pem org=jenkins.addons-161312 san=[127.0.0.1 192.168.49.2 addons-161312 localhost minikube]
I0828 16:52:05.956600 8351 provision.go:177] copyRemoteCerts
I0828 16:52:05.956665 8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0828 16:52:05.956705 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:05.973956 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:06.073367 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0828 16:52:06.098902 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0828 16:52:06.125727 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0828 16:52:06.151800 8351 provision.go:87] duration metric: took 644.186777ms to configureAuth
I0828 16:52:06.151828 8351 ubuntu.go:193] setting minikube options for container-runtime
I0828 16:52:06.152031 8351 config.go:182] Loaded profile config "addons-161312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 16:52:06.152094 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:06.168797 8351 main.go:141] libmachine: Using SSH client type: native
I0828 16:52:06.169071 8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0828 16:52:06.169092 8351 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0828 16:52:06.303814 8351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0828 16:52:06.303833 8351 ubuntu.go:71] root file system type: overlay
I0828 16:52:06.303949 8351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0828 16:52:06.304021 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:06.321476 8351 main.go:141] libmachine: Using SSH client type: native
I0828 16:52:06.321728 8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0828 16:52:06.321810 8351 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0828 16:52:06.473316 8351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0828 16:52:06.473429 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:06.491588 8351 main.go:141] libmachine: Using SSH client type: native
I0828 16:52:06.491844 8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0828 16:52:06.491866 8351 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0828 16:52:07.263521 8351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-08-12 11:49:05.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-08-28 16:52:06.466932266 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0828 16:52:07.263607 8351 machine.go:96] duration metric: took 2.309272218s to provisionDockerMachine
I0828 16:52:07.263635 8351 client.go:171] duration metric: took 11.757925354s to LocalClient.Create
I0828 16:52:07.263689 8351 start.go:167] duration metric: took 11.75804484s to libmachine.API.Create "addons-161312"
I0828 16:52:07.263740 8351 start.go:293] postStartSetup for "addons-161312" (driver="docker")
I0828 16:52:07.263767 8351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0828 16:52:07.263866 8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0828 16:52:07.263934 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:07.280743 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:07.377135 8351 ssh_runner.go:195] Run: cat /etc/os-release
I0828 16:52:07.380663 8351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0828 16:52:07.380698 8351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0828 16:52:07.380711 8351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0828 16:52:07.380739 8351 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0828 16:52:07.380755 8351 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-2268/.minikube/addons for local assets ...
I0828 16:52:07.380856 8351 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-2268/.minikube/files for local assets ...
I0828 16:52:07.380884 8351 start.go:296] duration metric: took 117.122488ms for postStartSetup
I0828 16:52:07.381219 8351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-161312
I0828 16:52:07.397829 8351 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/config.json ...
I0828 16:52:07.398118 8351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0828 16:52:07.398172 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:07.415436 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:07.507976 8351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0828 16:52:07.512492 8351 start.go:128] duration metric: took 12.009522513s to createHost
I0828 16:52:07.512514 8351 start.go:83] releasing machines lock for "addons-161312", held for 12.009669583s
I0828 16:52:07.512586 8351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-161312
I0828 16:52:07.529538 8351 ssh_runner.go:195] Run: cat /version.json
I0828 16:52:07.529557 8351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0828 16:52:07.529593 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:07.529618 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:07.547836 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:07.548565 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:07.785303 8351 ssh_runner.go:195] Run: systemctl --version
I0828 16:52:07.789741 8351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0828 16:52:07.794137 8351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0828 16:52:07.824172 8351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0828 16:52:07.824308 8351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0828 16:52:07.853713 8351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0828 16:52:07.853792 8351 start.go:495] detecting cgroup driver to use...
I0828 16:52:07.853841 8351 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0828 16:52:07.853963 8351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0828 16:52:07.870786 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0828 16:52:07.881374 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0828 16:52:07.891392 8351 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0828 16:52:07.891464 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0828 16:52:07.902034 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0828 16:52:07.912886 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0828 16:52:07.923036 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0828 16:52:07.933344 8351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0828 16:52:07.942854 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0828 16:52:07.954110 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0828 16:52:07.964137 8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0828 16:52:07.974082 8351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0828 16:52:07.983000 8351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0828 16:52:07.991928 8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0828 16:52:08.093805 8351 ssh_runner.go:195] Run: sudo systemctl restart containerd
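The sed edits and sysctl calls above pin containerd to the cgroupfs driver, set the pause image, and switch on the kernel settings kubeadm's preflight later relies on. A small sketch, assuming a shell on the node, for confirming the result:
# cgroup driver and sandbox image as rewritten above
grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false
grep -n 'sandbox_image' /etc/containerd/config.toml    # expect: registry.k8s.io/pause:3.10
# kernel prerequisites touched above
sysctl net.bridge.bridge-nf-call-iptables              # expect: = 1
sysctl net.ipv4.ip_forward                             # expect: = 1
sudo systemctl is-active containerd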
I0828 16:52:08.203511 8351 start.go:495] detecting cgroup driver to use...
I0828 16:52:08.203578 8351 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0828 16:52:08.203653 8351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0828 16:52:08.221185 8351 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0828 16:52:08.221302 8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0828 16:52:08.233770 8351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0828 16:52:08.251409 8351 ssh_runner.go:195] Run: which cri-dockerd
I0828 16:52:08.255212 8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0828 16:52:08.265000 8351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0828 16:52:08.285824 8351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0828 16:52:08.393852 8351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0828 16:52:08.486129 8351 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0828 16:52:08.486308 8351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0828 16:52:08.505926 8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0828 16:52:08.604255 8351 ssh_runner.go:195] Run: sudo systemctl restart docker
I0828 16:52:08.878379 8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0828 16:52:08.891001 8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0828 16:52:08.904114 8351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0828 16:52:08.999635 8351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0828 16:52:09.104457 8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0828 16:52:09.196709 8351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0828 16:52:09.211512 8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0828 16:52:09.223761 8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0828 16:52:09.316587 8351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0828 16:52:09.400754 8351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0828 16:52:09.400845 8351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0828 16:52:09.404971 8351 start.go:563] Will wait 60s for crictl version
I0828 16:52:09.405078 8351 ssh_runner.go:195] Run: which crictl
I0828 16:52:09.408680 8351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0828 16:52:09.447652 8351 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.2
RuntimeApiVersion: v1
I0828 16:52:09.447762 8351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0828 16:52:09.469254 8351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0828 16:52:09.494723 8351 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
I0828 16:52:09.494848 8351 cli_runner.go:164] Run: docker network inspect addons-161312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0828 16:52:09.510923 8351 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0828 16:52:09.514879 8351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0828 16:52:09.526173 8351 kubeadm.go:883] updating cluster {Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0828 16:52:09.526302 8351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0828 16:52:09.526363 8351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0828 16:52:09.545258 8351 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0828 16:52:09.545279 8351 docker.go:615] Images already preloaded, skipping extraction
I0828 16:52:09.545346 8351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0828 16:52:09.563624 8351 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0828 16:52:09.563650 8351 cache_images.go:84] Images are preloaded, skipping loading
I0828 16:52:09.563677 8351 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
I0828 16:52:09.563783 8351 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-161312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
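The kubelet ExecStart rendered above is installed on the node as a systemd drop-in (the 10-kubeadm.conf copied a few lines below). A hedged sketch of inspecting the effective unit once it is in place:
# view the kubelet unit together with the 10-kubeadm.conf drop-in
sudo systemctl cat kubelet
# pick up drop-in changes and (re)start, as the log does further down
sudo systemctl daemon-reload && sudo systemctl restart kubelet
systemctl is-active kubelet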
I0828 16:52:09.563855 8351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0828 16:52:09.613500 8351 cni.go:84] Creating CNI manager for ""
I0828 16:52:09.613524 8351 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0828 16:52:09.613534 8351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0828 16:52:09.613552 8351 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-161312 NodeName:addons-161312 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0828 16:52:09.613703 8351 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "addons-161312"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0828 16:52:09.613777 8351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0828 16:52:09.624345 8351 binaries.go:44] Found k8s binaries, skipping transfer
I0828 16:52:09.624416 8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0828 16:52:09.633257 8351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0828 16:52:09.654397 8351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0828 16:52:09.673800 8351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0828 16:52:09.692388 8351 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0828 16:52:09.695865 8351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
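Both cluster-internal host names are maintained with the same idempotent rewrite shown above: strip any existing entry, append the fresh mapping, and copy the result back over /etc/hosts. A quick sketch for verifying the entries on the node:
# both minikube-managed entries should now resolve from the hosts file
grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
getent hosts control-plane.minikube.internal    # expect 192.168.49.2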
I0828 16:52:09.707257 8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0828 16:52:09.789493 8351 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0828 16:52:09.803549 8351 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312 for IP: 192.168.49.2
I0828 16:52:09.803567 8351 certs.go:194] generating shared ca certs ...
I0828 16:52:09.803585 8351 certs.go:226] acquiring lock for ca certs: {Name:mk4271d0c0edfadb28da5225f3695d190103a80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:09.803716 8351 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key
I0828 16:52:10.200053 8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt ...
I0828 16:52:10.200089 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt: {Name:mkf3724c4bba2c3d496e6bccd2159bfc8c93663f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:10.200324 8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key ...
I0828 16:52:10.200336 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key: {Name:mk31c6a00d734d5c3c2cef1983b97aeef28d7e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:10.200416 8351 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key
I0828 16:52:10.377737 8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.crt ...
I0828 16:52:10.377766 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.crt: {Name:mkb7d2fe42c83c663df3c323544682df706cfa10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:10.377946 8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key ...
I0828 16:52:10.377959 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key: {Name:mkde1bfa5876cc86e88933dc4f11e26338aec186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:10.378044 8351 certs.go:256] generating profile certs ...
I0828 16:52:10.378107 8351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.key
I0828 16:52:10.378126 8351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt with IP's: []
I0828 16:52:10.668514 8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt ...
I0828 16:52:10.668546 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: {Name:mk0bf75a68352223126840db807ae3de1785496f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:10.668761 8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.key ...
I0828 16:52:10.668776 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.key: {Name:mke5437844e3e8336640394824ad2200149c1ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:10.668901 8351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2
I0828 16:52:10.668924 8351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0828 16:52:11.051012 8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2 ...
I0828 16:52:11.051047 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2: {Name:mkc6a19c67206bdf37dd83ab3e556e81ed6bab1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:11.051240 8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2 ...
I0828 16:52:11.051257 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2: {Name:mkc5d85cd28a5e2d26534e1c655c148fbbccba54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:11.051360 8351 certs.go:381] copying /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2 -> /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt
I0828 16:52:11.051448 8351 certs.go:385] copying /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2 -> /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key
I0828 16:52:11.051503 8351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key
I0828 16:52:11.051527 8351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt with IP's: []
I0828 16:52:11.226673 8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt ...
I0828 16:52:11.226703 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt: {Name:mk568c76b92e501f69dd6bbe51c69bbf287935ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:11.226869 8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key ...
I0828 16:52:11.226890 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key: {Name:mk5718a55b649ac4323a7c85cd30a6e29a7704f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:11.227065 8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca-key.pem (1679 bytes)
I0828 16:52:11.227107 8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem (1078 bytes)
I0828 16:52:11.227134 8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem (1123 bytes)
I0828 16:52:11.227164 8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/key.pem (1675 bytes)
I0828 16:52:11.227793 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0828 16:52:11.254064 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0828 16:52:11.279495 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0828 16:52:11.304326 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0828 16:52:11.330904 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0828 16:52:11.360132 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0828 16:52:11.391428 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0828 16:52:11.418601 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0828 16:52:11.447434 8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0828 16:52:11.472808 8351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0828 16:52:11.491782 8351 ssh_runner.go:195] Run: openssl version
I0828 16:52:11.497467 8351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0828 16:52:11.507661 8351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0828 16:52:11.511289 8351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
I0828 16:52:11.511428 8351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0828 16:52:11.518696 8351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
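The two commands above implement OpenSSL's subject-hash lookup scheme: the CA is linked into /etc/ssl/certs under the hash of its subject so that anything using the default CApath can find it. A sketch of how the b5213941.0 name is derived:
# subject hash of the minikube CA; prints b5213941 for this certificate
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
# the self-signed CA should now verify against the default CApath
openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem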
I0828 16:52:11.528325 8351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0828 16:52:11.532016 8351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0828 16:52:11.532060 8351 kubeadm.go:392] StartCluster: {Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0828 16:52:11.532193 8351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0828 16:52:11.550087 8351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0828 16:52:11.559062 8351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0828 16:52:11.568438 8351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0828 16:52:11.568503 8351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0828 16:52:11.578976 8351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0828 16:52:11.578994 8351 kubeadm.go:157] found existing configuration files:
I0828 16:52:11.579045 8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0828 16:52:11.587614 8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0828 16:52:11.587676 8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0828 16:52:11.595871 8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0828 16:52:11.605211 8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0828 16:52:11.605284 8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0828 16:52:11.613630 8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0828 16:52:11.622499 8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0828 16:52:11.622591 8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0828 16:52:11.631247 8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0828 16:52:11.639958 8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0828 16:52:11.640046 8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0828 16:52:11.648441 8351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0828 16:52:11.688977 8351 kubeadm.go:310] W0828 16:52:11.688316 1805 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0828 16:52:11.691165 8351 kubeadm.go:310] W0828 16:52:11.690587 1805 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0828 16:52:11.714959 8351 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
I0828 16:52:11.774326 8351 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
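The two deprecation warnings above are kubeadm v1.31 flagging the v1beta3 config rendered earlier. The remediation kubeadm itself points at is a config migration; a sketch using the paths from this run (the output path is an illustrative placeholder):
sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
  --old-config /var/tmp/minikube/kubeadm.yaml \
  --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml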
I0828 16:52:30.136239 8351 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0828 16:52:30.136351 8351 kubeadm.go:310] [preflight] Running pre-flight checks
I0828 16:52:30.136469 8351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0828 16:52:30.136543 8351 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
I0828 16:52:30.136605 8351 kubeadm.go:310] OS: Linux
I0828 16:52:30.136672 8351 kubeadm.go:310] CGROUPS_CPU: enabled
I0828 16:52:30.136746 8351 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0828 16:52:30.136815 8351 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0828 16:52:30.136889 8351 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0828 16:52:30.136957 8351 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0828 16:52:30.137032 8351 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0828 16:52:30.137095 8351 kubeadm.go:310] CGROUPS_PIDS: enabled
I0828 16:52:30.137168 8351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0828 16:52:30.137237 8351 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0828 16:52:30.137332 8351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0828 16:52:30.137450 8351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0828 16:52:30.137567 8351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0828 16:52:30.137646 8351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0828 16:52:30.141948 8351 out.go:235] - Generating certificates and keys ...
I0828 16:52:30.142128 8351 kubeadm.go:310] [certs] Using existing ca certificate authority
I0828 16:52:30.142232 8351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0828 16:52:30.142335 8351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0828 16:52:30.142445 8351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0828 16:52:30.142528 8351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0828 16:52:30.142591 8351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0828 16:52:30.142652 8351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0828 16:52:30.142778 8351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-161312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0828 16:52:30.142842 8351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0828 16:52:30.142961 8351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-161312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0828 16:52:30.143036 8351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0828 16:52:30.143147 8351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0828 16:52:30.143221 8351 kubeadm.go:310] [certs] Generating "sa" key and public key
I0828 16:52:30.143277 8351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0828 16:52:30.143562 8351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0828 16:52:30.143632 8351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0828 16:52:30.143703 8351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0828 16:52:30.143833 8351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0828 16:52:30.143898 8351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0828 16:52:30.144000 8351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0828 16:52:30.144097 8351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0828 16:52:30.148080 8351 out.go:235] - Booting up control plane ...
I0828 16:52:30.148201 8351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0828 16:52:30.148294 8351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0828 16:52:30.148366 8351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0828 16:52:30.148531 8351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0828 16:52:30.148631 8351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0828 16:52:30.148674 8351 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0828 16:52:30.148808 8351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0828 16:52:30.148917 8351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0828 16:52:30.148981 8351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.501767605s
I0828 16:52:30.149056 8351 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0828 16:52:30.149117 8351 kubeadm.go:310] [api-check] The API server is healthy after 7.001934983s
I0828 16:52:30.149223 8351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0828 16:52:30.149346 8351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0828 16:52:30.149406 8351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0828 16:52:30.149582 8351 kubeadm.go:310] [mark-control-plane] Marking the node addons-161312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0828 16:52:30.149641 8351 kubeadm.go:310] [bootstrap-token] Using token: ny7wfw.cf9xojta6jouq4ye
I0828 16:52:30.153185 8351 out.go:235] - Configuring RBAC rules ...
I0828 16:52:30.153340 8351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0828 16:52:30.153426 8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0828 16:52:30.153564 8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0828 16:52:30.153699 8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0828 16:52:30.153817 8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0828 16:52:30.153903 8351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0828 16:52:30.154019 8351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0828 16:52:30.154064 8351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0828 16:52:30.154111 8351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0828 16:52:30.154119 8351 kubeadm.go:310]
I0828 16:52:30.154176 8351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0828 16:52:30.154186 8351 kubeadm.go:310]
I0828 16:52:30.154278 8351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0828 16:52:30.154284 8351 kubeadm.go:310]
I0828 16:52:30.154310 8351 kubeadm.go:310] mkdir -p $HOME/.kube
I0828 16:52:30.154371 8351 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0828 16:52:30.154423 8351 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0828 16:52:30.154438 8351 kubeadm.go:310]
I0828 16:52:30.154493 8351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0828 16:52:30.154506 8351 kubeadm.go:310]
I0828 16:52:30.154553 8351 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0828 16:52:30.154560 8351 kubeadm.go:310]
I0828 16:52:30.154612 8351 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0828 16:52:30.154688 8351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0828 16:52:30.154758 8351 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0828 16:52:30.154763 8351 kubeadm.go:310]
I0828 16:52:30.154844 8351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0828 16:52:30.154918 8351 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0828 16:52:30.154924 8351 kubeadm.go:310]
I0828 16:52:30.155005 8351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wfw.cf9xojta6jouq4ye \
I0828 16:52:30.155110 8351 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:32297ca7f0abb6eea50ed3c14eaeba642f0933631e0d91616c2b0d22f9e1a84c \
I0828 16:52:30.155137 8351 kubeadm.go:310] --control-plane
I0828 16:52:30.155141 8351 kubeadm.go:310]
I0828 16:52:30.155223 8351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0828 16:52:30.155230 8351 kubeadm.go:310]
I0828 16:52:30.155367 8351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wfw.cf9xojta6jouq4ye \
I0828 16:52:30.155520 8351 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:32297ca7f0abb6eea50ed3c14eaeba642f0933631e0d91616c2b0d22f9e1a84c
I0828 16:52:30.155548 8351 cni.go:84] Creating CNI manager for ""
I0828 16:52:30.155563 8351 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0828 16:52:30.159046 8351 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0828 16:52:30.161089 8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0828 16:52:30.173400 8351 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
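The 496-byte 1-k8s.conflist written above is minikube's bridge CNI configuration; its contents are not printed in this log. Purely as an illustration of the shape a bridge+portmap conflist takes (every value below is an assumption, not the file minikube actually wrote; the pod subnet matches the 10.244.0.0/16 CIDR chosen earlier):
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF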
I0828 16:52:30.207426 8351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0828 16:52:30.207634 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-161312 minikube.k8s.io/updated_at=2024_08_28T16_52_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-161312 minikube.k8s.io/primary=true
I0828 16:52:30.207693 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:30.217266 8351 ops.go:34] apiserver oom_adj: -16
I0828 16:52:30.328954 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:30.829886 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:31.329058 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:31.829802 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:32.329588 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:32.829039 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:33.329140 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:33.829639 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:34.329700 8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:34.464408 8351 kubeadm.go:1113] duration metric: took 4.256866605s to wait for elevateKubeSystemPrivileges
I0828 16:52:34.464440 8351 kubeadm.go:394] duration metric: took 22.932382063s to StartCluster
I0828 16:52:34.464457 8351 settings.go:142] acquiring lock: {Name:mke1e724d192d07afd5e039ebae8b3217691ebf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:34.464570 8351 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19529-2268/kubeconfig
I0828 16:52:34.464984 8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/kubeconfig: {Name:mk783f27e67c290c3cb897056b28951084501c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:34.465181 8351 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0828 16:52:34.465290 8351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0828 16:52:34.465586 8351 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0828 16:52:34.465681 8351 config.go:182] Loaded profile config "addons-161312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 16:52:34.465721 8351 addons.go:69] Setting default-storageclass=true in profile "addons-161312"
I0828 16:52:34.465774 8351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-161312"
I0828 16:52:34.465707 8351 addons.go:69] Setting yakd=true in profile "addons-161312"
I0828 16:52:34.465894 8351 addons.go:234] Setting addon yakd=true in "addons-161312"
I0828 16:52:34.465938 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.466122 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.466565 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.466949 8351 addons.go:69] Setting gcp-auth=true in profile "addons-161312"
I0828 16:52:34.466989 8351 mustload.go:65] Loading cluster: addons-161312
I0828 16:52:34.467162 8351 config.go:182] Loaded profile config "addons-161312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 16:52:34.467432 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.469623 8351 addons.go:69] Setting ingress=true in profile "addons-161312"
I0828 16:52:34.469663 8351 addons.go:234] Setting addon ingress=true in "addons-161312"
I0828 16:52:34.469703 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.470299 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.465714 8351 addons.go:69] Setting cloud-spanner=true in profile "addons-161312"
I0828 16:52:34.471650 8351 addons.go:234] Setting addon cloud-spanner=true in "addons-161312"
I0828 16:52:34.471695 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.465718 8351 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-161312"
I0828 16:52:34.471880 8351 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-161312"
I0828 16:52:34.471904 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.471989 8351 addons.go:69] Setting ingress-dns=true in profile "addons-161312"
I0828 16:52:34.472011 8351 addons.go:234] Setting addon ingress-dns=true in "addons-161312"
I0828 16:52:34.472038 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.472152 8351 addons.go:69] Setting inspektor-gadget=true in profile "addons-161312"
I0828 16:52:34.472168 8351 addons.go:234] Setting addon inspektor-gadget=true in "addons-161312"
I0828 16:52:34.472183 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.472324 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.472597 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.473983 8351 out.go:177] * Verifying Kubernetes components...
I0828 16:52:34.477315 8351 addons.go:69] Setting metrics-server=true in profile "addons-161312"
I0828 16:52:34.477355 8351 addons.go:234] Setting addon metrics-server=true in "addons-161312"
I0828 16:52:34.477392 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.477849 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.488638 8351 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-161312"
I0828 16:52:34.488683 8351 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-161312"
I0828 16:52:34.488719 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.489162 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.491929 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.515397 8351 addons.go:69] Setting registry=true in profile "addons-161312"
I0828 16:52:34.515443 8351 addons.go:234] Setting addon registry=true in "addons-161312"
I0828 16:52:34.515492 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.515950 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.519833 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.532833 8351 addons.go:69] Setting storage-provisioner=true in profile "addons-161312"
I0828 16:52:34.533439 8351 addons.go:234] Setting addon storage-provisioner=true in "addons-161312"
I0828 16:52:34.533504 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.534412 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.538382 8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0828 16:52:34.565998 8351 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-161312"
I0828 16:52:34.566102 8351 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-161312"
I0828 16:52:34.566555 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.615272 8351 addons.go:69] Setting volcano=true in profile "addons-161312"
I0828 16:52:34.623799 8351 addons.go:234] Setting addon volcano=true in "addons-161312"
I0828 16:52:34.623850 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.624304 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.628970 8351 addons.go:234] Setting addon default-storageclass=true in "addons-161312"
I0828 16:52:34.629012 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.629430 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.638121 8351 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0828 16:52:34.639802 8351 addons.go:69] Setting volumesnapshots=true in profile "addons-161312"
I0828 16:52:34.639879 8351 addons.go:234] Setting addon volumesnapshots=true in "addons-161312"
I0828 16:52:34.639922 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.640394 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.642142 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.642299 8351 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0828 16:52:34.650980 8351 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0828 16:52:34.653737 8351 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0828 16:52:34.653813 8351 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0828 16:52:34.654136 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.655340 8351 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
I0828 16:52:34.655913 8351 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0828 16:52:34.657580 8351 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0828 16:52:34.657645 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0828 16:52:34.657743 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
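The "scp memory --> /etc/kubernetes/addons/..." lines above record minikube rendering a manifest in memory and streaming it to the node over SSH. As a rough illustration of that pattern only (this is not minikube's ssh_runner/sshutil code; the address, key path, and destination are copied from the log purely for illustration), a minimal Go sketch might look like this:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyToNode writes an in-memory manifest to a path on the node over SSH,
// roughly what the "scp memory --> /etc/kubernetes/addons/..." log lines do.
// This is an illustrative sketch, not minikube's implementation.
func copyToNode(addr, keyPath, dest string, data []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// Stream the manifest into the destination file on the node.
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
	if err := copyToNode("127.0.0.1:32768",
		"/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa",
		"/etc/kubernetes/addons/deployment.yaml", manifest); err != nil {
		panic(err)
	}
}
```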
I0828 16:52:34.659563 8351 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0828 16:52:34.659584 8351 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0828 16:52:34.659646 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.687950 8351 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0828 16:52:34.688024 8351 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0828 16:52:34.688130 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.717594 8351 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0828 16:52:34.720290 8351 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0828 16:52:34.723404 8351 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0828 16:52:34.723511 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0828 16:52:34.723614 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.748191 8351 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0828 16:52:34.750271 8351 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0828 16:52:34.752270 8351 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0828 16:52:34.754441 8351 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0828 16:52:34.756170 8351 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0828 16:52:34.756469 8351 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0828 16:52:34.756504 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0828 16:52:34.756597 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.758886 8351 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0828 16:52:34.760148 8351 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:34.760196 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0828 16:52:34.760290 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.778196 8351 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0828 16:52:34.780213 8351 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0828 16:52:34.782206 8351 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0828 16:52:34.787515 8351 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0828 16:52:34.789691 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0828 16:52:34.789713 8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0828 16:52:34.789787 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.812082 8351 out.go:177] - Using image docker.io/registry:2.8.3
I0828 16:52:34.813787 8351 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0828 16:52:34.815479 8351 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0828 16:52:34.815499 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0828 16:52:34.815569 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.838580 8351 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0828 16:52:34.840420 8351 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0828 16:52:34.840443 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0828 16:52:34.840506 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.897866 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:34.901992 8351 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-161312"
I0828 16:52:34.902035 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:34.902440 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:34.903431 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:34.916958 8351 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:34.916978 8351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0828 16:52:34.917038 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.937046 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:34.940273 8351 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0828 16:52:34.940457 8351 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0828 16:52:34.942197 8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0828 16:52:34.942220 8351 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0828 16:52:34.942291 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.945186 8351 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0828 16:52:34.947132 8351 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0828 16:52:34.950245 8351 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0828 16:52:34.950344 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0828 16:52:34.950450 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:34.977586 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:34.995966 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.023763 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.024306 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.040156 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.068409 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.080502 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.107006 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.135547 8351 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0828 16:52:35.137864 8351 out.go:177] - Using image docker.io/busybox:stable
I0828 16:52:35.144659 8351 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0828 16:52:35.144684 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0828 16:52:35.144752 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:35.145808 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
W0828 16:52:35.152985 8351 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0828 16:52:35.153075 8351 retry.go:31] will retry after 329.757604ms: ssh: handshake failed: EOF
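The two lines above show an SSH handshake failing with EOF and retry.go scheduling another attempt after a short delay. As a minimal sketch of that retry-with-backoff behaviour (the helper, attempt count, and delays below are hypothetical, not minikube's retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing delay
// between failures, similar in spirit to the "will retry after ..." lines
// in the log. Illustrative only.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential backoff with up to 50% random jitter.
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```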
I0828 16:52:35.154393 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.186974 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:35.495099 8351 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0828 16:52:35.495161 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0828 16:52:35.519837 8351 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0828 16:52:35.519871 8351 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0828 16:52:35.544456 8351 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0828 16:52:35.544512 8351 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0828 16:52:35.576115 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0828 16:52:35.686686 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0828 16:52:35.710157 8351 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0828 16:52:35.710199 8351 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0828 16:52:35.819354 8351 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0828 16:52:35.819442 8351 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0828 16:52:35.895140 8351 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.356716908s)
I0828 16:52:35.895230 8351 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0828 16:52:35.895351 8351 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.430043624s)
I0828 16:52:35.895532 8351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0828 16:52:35.939652 8351 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0828 16:52:35.939726 8351 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0828 16:52:35.948285 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:35.992454 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0828 16:52:36.088628 8351 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0828 16:52:36.088707 8351 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0828 16:52:36.091274 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:36.251102 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0828 16:52:36.318604 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0828 16:52:36.348416 8351 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0828 16:52:36.348489 8351 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0828 16:52:36.356880 8351 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0828 16:52:36.356955 8351 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0828 16:52:36.421646 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0828 16:52:36.437657 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0828 16:52:36.437730 8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0828 16:52:36.446567 8351 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0828 16:52:36.446642 8351 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0828 16:52:36.462475 8351 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0828 16:52:36.462542 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0828 16:52:36.707683 8351 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0828 16:52:36.707758 8351 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0828 16:52:36.747249 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0828 16:52:36.747389 8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0828 16:52:36.757960 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0828 16:52:36.824553 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0828 16:52:36.835859 8351 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0828 16:52:36.835933 8351 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0828 16:52:36.882365 8351 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0828 16:52:36.882442 8351 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0828 16:52:36.938439 8351 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0828 16:52:36.938509 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0828 16:52:37.024149 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0828 16:52:37.024233 8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0828 16:52:37.108518 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0828 16:52:37.108595 8351 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0828 16:52:37.288907 8351 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0828 16:52:37.288987 8351 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0828 16:52:37.388830 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0828 16:52:37.388906 8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0828 16:52:37.437032 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0828 16:52:37.580294 8351 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:37.580313 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0828 16:52:37.745751 8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0828 16:52:37.745776 8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0828 16:52:37.812262 8351 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0828 16:52:37.812334 8351 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0828 16:52:37.907344 8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0828 16:52:37.907416 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0828 16:52:37.914594 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:38.073893 8351 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0828 16:52:38.073969 8351 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0828 16:52:38.182383 8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0828 16:52:38.182456 8351 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0828 16:52:38.335523 8351 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0828 16:52:38.335595 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0828 16:52:38.634328 8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0828 16:52:38.634389 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0828 16:52:38.709942 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0828 16:52:38.985489 8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0828 16:52:38.985564 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0828 16:52:39.377666 8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0828 16:52:39.377740 8351 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0828 16:52:39.976638 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0828 16:52:40.577577 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.00142585s)
I0828 16:52:40.577608 8351 addons.go:475] Verifying addon metrics-server=true in "addons-161312"
I0828 16:52:40.577647 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.890938259s)
I0828 16:52:40.577696 8351 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.682146261s)
I0828 16:52:40.577706 8351 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
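The completed command above injects a host record into CoreDNS by piping the coredns ConfigMap through sed and `kubectl replace`, so that host.minikube.internal resolves to 192.168.49.1 inside the cluster. A roughly equivalent edit done directly with client-go is sketched below; the kubeconfig path and gateway IP are taken from the log, while the exact Corefile rewrite and indentation are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// addHostRecord inserts a hosts{} block for host.minikube.internal into the
// coredns Corefile, mimicking what the sed pipeline in the log does.
func addHostRecord(ctx context.Context, client kubernetes.Interface, gatewayIP string) error {
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", gatewayIP)
	// Insert the hosts block just before the "forward ." directive.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts+"    forward .", 1)
	_, err = client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := addHostRecord(context.Background(), client, "192.168.49.1"); err != nil {
		panic(err)
	}
}
```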
I0828 16:52:40.578809 8351 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.683549884s)
I0828 16:52:40.579560 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.631170691s)
I0828 16:52:40.580110 8351 node_ready.go:35] waiting up to 6m0s for node "addons-161312" to be "Ready" ...
I0828 16:52:40.623553 8351 node_ready.go:49] node "addons-161312" has status "Ready":"True"
I0828 16:52:40.623579 8351 node_ready.go:38] duration metric: took 43.420322ms for node "addons-161312" to be "Ready" ...
I0828 16:52:40.623590 8351 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0828 16:52:40.661413 8351 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4w259" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.081603 8351 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-161312" context rescaled to 1 replicas
I0828 16:52:41.171955 8351 pod_ready.go:93] pod "coredns-6f6b679f8f-4w259" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:41.172028 8351 pod_ready.go:82] duration metric: took 510.537652ms for pod "coredns-6f6b679f8f-4w259" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.172056 8351 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hcl4z" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.178235 8351 pod_ready.go:93] pod "coredns-6f6b679f8f-hcl4z" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:41.178310 8351 pod_ready.go:82] duration metric: took 6.233972ms for pod "coredns-6f6b679f8f-hcl4z" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.178336 8351 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.184621 8351 pod_ready.go:93] pod "etcd-addons-161312" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:41.184703 8351 pod_ready.go:82] duration metric: took 6.336958ms for pod "etcd-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.184729 8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.190444 8351 pod_ready.go:93] pod "kube-apiserver-addons-161312" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:41.190514 8351 pod_ready.go:82] duration metric: took 5.748782ms for pod "kube-apiserver-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.190539 8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.384216 8351 pod_ready.go:93] pod "kube-controller-manager-addons-161312" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:41.384288 8351 pod_ready.go:82] duration metric: took 193.726836ms for pod "kube-controller-manager-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.384404 8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j6f7q" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.652341 8351 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0828 16:52:41.652511 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:41.679421 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:41.783559 8351 pod_ready.go:93] pod "kube-proxy-j6f7q" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:41.783581 8351 pod_ready.go:82] duration metric: took 399.151269ms for pod "kube-proxy-j6f7q" in "kube-system" namespace to be "Ready" ...
I0828 16:52:41.783591 8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:42.198054 8351 pod_ready.go:93] pod "kube-scheduler-addons-161312" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:42.198084 8351 pod_ready.go:82] duration metric: took 414.485212ms for pod "kube-scheduler-addons-161312" in "kube-system" namespace to be "Ready" ...
I0828 16:52:42.198098 8351 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace to be "Ready" ...
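From here on, pod_ready.go keeps polling the metrics-server pod until its Ready condition turns True (the repeated "Ready":"False" lines below). A bare-bones version of that check with client-go could look like the sketch below; the poll interval and timeout are illustrative, and the pod name and kubeconfig path are copied from this log only as placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a named pod until it is Ready or the timeout expires.
func waitForPodReady(client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		return isPodReady(pod), nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(client, "kube-system", "metrics-server-84c5f94fbc-2gwmk", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```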
I0828 16:52:42.334809 8351 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0828 16:52:42.747512 8351 addons.go:234] Setting addon gcp-auth=true in "addons-161312"
I0828 16:52:42.747558 8351 host.go:66] Checking if "addons-161312" exists ...
I0828 16:52:42.748020 8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
I0828 16:52:42.768607 8351 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0828 16:52:42.768672 8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
I0828 16:52:42.797548 8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
I0828 16:52:44.206140 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:44.913665 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.921126115s)
I0828 16:52:44.913701 8351 addons.go:475] Verifying addon ingress=true in "addons-161312"
I0828 16:52:44.913873 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.82243821s)
I0828 16:52:44.913923 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.662733647s)
I0828 16:52:44.914017 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.595346076s)
I0828 16:52:44.914065 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.492351748s)
I0828 16:52:44.916869 8351 out.go:177] * Verifying ingress addon...
I0828 16:52:44.920310 8351 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0828 16:52:44.929913 8351 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0828 16:52:44.929942 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:45.426134 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:45.926162 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:46.208011 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:46.425610 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:46.938507 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:47.447729 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:47.946048 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:48.074075 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.316018607s)
I0828 16:52:48.074150 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.249535308s)
I0828 16:52:48.074172 8351 addons.go:475] Verifying addon registry=true in "addons-161312"
I0828 16:52:48.074485 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.637282724s)
I0828 16:52:48.074726 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.160055057s)
W0828 16:52:48.074760 8351 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0828 16:52:48.074806 8351 retry.go:31] will retry after 247.600913ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
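The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the CRD is not yet established when the apply runs, so the REST mapping fails and minikube schedules a retry (which succeeds below). One general way to avoid the race is to wait for the CRD's Established condition before applying resources of that kind; a minimal client-go sketch of such a check follows, with the CRD name and kubeconfig path taken from this log for illustration only.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls a CustomResourceDefinition until the API server
// marks it Established, i.e. custom resources of that kind can be created.
func waitForCRDEstablished(client apiextclient.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // CRD not created yet; keep polling
		}
		for _, cond := range crd.Status.Conditions {
			if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)
	if err := waitForCRDEstablished(client, "volumesnapshotclasses.snapshot.storage.k8s.io", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
```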
I0828 16:52:48.074893 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.364863952s)
I0828 16:52:48.076437 8351 out.go:177] * Verifying registry addon...
I0828 16:52:48.076590 8351 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-161312 service yakd-dashboard -n yakd-dashboard
I0828 16:52:48.079088 8351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0828 16:52:48.180586 8351 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0828 16:52:48.180614 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:48.297495 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:48.322746 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:48.457264 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:48.551330 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.574634954s)
I0828 16:52:48.551368 8351 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-161312"
I0828 16:52:48.551419 8351 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.782791574s)
I0828 16:52:48.554344 8351 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0828 16:52:48.554497 8351 out.go:177] * Verifying csi-hostpath-driver addon...
I0828 16:52:48.556249 8351 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0828 16:52:48.558559 8351 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0828 16:52:48.558624 8351 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0828 16:52:48.559672 8351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0828 16:52:48.597456 8351 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0828 16:52:48.597483 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:48.664960 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:48.739789 8351 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0828 16:52:48.739816 8351 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0828 16:52:48.855066 8351 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0828 16:52:48.855092 8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0828 16:52:48.914918 8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0828 16:52:48.927686 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:49.064704 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:49.083985 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:49.424915 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:49.568899 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:49.582695 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:49.925837 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:50.066548 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:50.086531 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:50.424785 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:50.514007 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.191212437s)
I0828 16:52:50.566300 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:50.591517 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:50.717917 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:50.757658 8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.842687037s)
I0828 16:52:50.760846 8351 addons.go:475] Verifying addon gcp-auth=true in "addons-161312"
I0828 16:52:50.763663 8351 out.go:177] * Verifying gcp-auth addon...
I0828 16:52:50.766976 8351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0828 16:52:50.770995 8351 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0828 16:52:50.924937 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:51.064672 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:51.083255 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:51.427613 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:51.564888 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:51.582923 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:51.928592 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:52.064566 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:52.083914 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:52.428704 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:52.565391 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:52.582937 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:52.924417 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:53.065887 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:53.083702 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:53.204645 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:53.424304 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:53.565674 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:53.584291 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:53.924935 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:54.064845 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:54.083505 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:54.425134 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:54.565836 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:54.583142 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:54.925037 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:55.066805 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:55.085534 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:55.206862 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:55.424749 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:55.566374 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:55.584115 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:55.924987 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:56.064613 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:56.083417 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:56.424728 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:56.564591 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:56.583291 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:56.926448 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:57.065056 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:57.083826 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:57.424341 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:57.565053 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:57.583464 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:57.710013 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:57.924496 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:58.064000 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:58.082883 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:58.424497 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:58.565492 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:58.582748 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:58.925516 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:59.064857 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:59.083026 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:59.424259 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:52:59.565171 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:59.582806 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:59.924887 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:00.232979 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:00.312422 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:00.314684 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:00.438523 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:00.564528 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:00.583031 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:00.925215 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:01.064866 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:01.084066 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:01.425352 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:01.566427 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:01.584635 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:01.925489 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:02.067896 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:02.083971 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:02.425049 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:02.566097 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:02.584106 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:02.704465 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:02.924915 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:03.065130 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:03.082987 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:03.425690 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:03.564981 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:03.582383 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:03.933770 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:04.065472 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:04.083092 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:04.424917 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:04.564788 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:04.583379 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:04.704876 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:04.924661 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:05.070396 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:05.083542 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:05.428566 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:05.565674 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:05.584116 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:05.925323 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:06.065449 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:06.082893 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:06.425980 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:06.565301 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:06.583397 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:06.705527 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:06.924839 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:07.064554 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:07.083599 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:07.424811 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:07.565107 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:07.583145 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:07.925716 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:08.067310 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:08.084532 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:08.425474 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:08.569691 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:08.583591 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:08.924745 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:09.069161 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:09.084005 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:09.204114 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:09.425207 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:09.565147 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:09.582891 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:09.924660 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:10.080195 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:10.100089 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:10.424684 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:10.564240 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:10.583906 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:10.925288 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:11.065836 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:11.083357 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:11.204415 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:11.424871 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:11.565357 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:11.583373 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:11.924799 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:12.066117 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:12.083110 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:12.425656 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:12.565540 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:12.583539 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:12.925123 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:13.065154 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:13.084472 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:13.205256 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:13.424554 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:13.566309 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:13.583338 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:53:13.925284 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:14.065465 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:14.083199 8351 kapi.go:107] duration metric: took 26.004110235s to wait for kubernetes.io/minikube-addons=registry ...
I0828 16:53:14.430252 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:14.565684 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:14.929431 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:15.069884 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:15.207201 8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
I0828 16:53:15.434212 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:15.565387 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:15.930142 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:16.070799 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:16.425151 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:16.565261 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:16.925050 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:17.066514 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:17.205923 8351 pod_ready.go:93] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"True"
I0828 16:53:17.205996 8351 pod_ready.go:82] duration metric: took 35.007888776s for pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace to be "Ready" ...
I0828 16:53:17.206021 8351 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lbb78" in "kube-system" namespace to be "Ready" ...
I0828 16:53:17.213015 8351 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lbb78" in "kube-system" namespace has status "Ready":"True"
I0828 16:53:17.213087 8351 pod_ready.go:82] duration metric: took 7.042981ms for pod "nvidia-device-plugin-daemonset-lbb78" in "kube-system" namespace to be "Ready" ...
I0828 16:53:17.213117 8351 pod_ready.go:39] duration metric: took 36.589515501s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0828 16:53:17.213164 8351 api_server.go:52] waiting for apiserver process to appear ...
I0828 16:53:17.213257 8351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:53:17.233327 8351 api_server.go:72] duration metric: took 42.768099334s to wait for apiserver process to appear ...
I0828 16:53:17.233388 8351 api_server.go:88] waiting for apiserver healthz status ...
I0828 16:53:17.233432 8351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0828 16:53:17.242507 8351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0828 16:53:17.243684 8351 api_server.go:141] control plane version: v1.31.0
I0828 16:53:17.243706 8351 api_server.go:131] duration metric: took 10.288445ms to wait for apiserver health ...
I0828 16:53:17.243714 8351 system_pods.go:43] waiting for kube-system pods to appear ...
I0828 16:53:17.254976 8351 system_pods.go:59] 17 kube-system pods found
I0828 16:53:17.255070 8351 system_pods.go:61] "coredns-6f6b679f8f-hcl4z" [9a756596-b7bf-46f4-980d-8062d8e5aa1f] Running
I0828 16:53:17.255097 8351 system_pods.go:61] "csi-hostpath-attacher-0" [c4679fdb-0197-47d9-b556-c74ff2f7b4d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0828 16:53:17.255133 8351 system_pods.go:61] "csi-hostpath-resizer-0" [e762894a-c229-4849-94fb-b1068d4897a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0828 16:53:17.255158 8351 system_pods.go:61] "csi-hostpathplugin-772lg" [5b927797-2d55-4d8e-982a-f8f23f5dd1e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0828 16:53:17.255178 8351 system_pods.go:61] "etcd-addons-161312" [e35c016f-6a9a-4ca9-8ef8-138c9453a446] Running
I0828 16:53:17.255198 8351 system_pods.go:61] "kube-apiserver-addons-161312" [533f4348-327f-4995-8b03-e3b792d2cb4e] Running
I0828 16:53:17.255217 8351 system_pods.go:61] "kube-controller-manager-addons-161312" [0ac59ac9-39b6-474d-b926-ba33667a7ad3] Running
I0828 16:53:17.255251 8351 system_pods.go:61] "kube-ingress-dns-minikube" [82ef5f44-8529-4660-9df8-d1fd1e34055c] Running
I0828 16:53:17.255270 8351 system_pods.go:61] "kube-proxy-j6f7q" [df5e438a-974c-4830-943c-d4b8a0c301cb] Running
I0828 16:53:17.255288 8351 system_pods.go:61] "kube-scheduler-addons-161312" [2fb79932-e90a-44fc-831a-7f9b52a380bc] Running
I0828 16:53:17.255328 8351 system_pods.go:61] "metrics-server-84c5f94fbc-2gwmk" [dd1f5b27-27c7-4ddf-973e-855eb2bbbe37] Running
I0828 16:53:17.255350 8351 system_pods.go:61] "nvidia-device-plugin-daemonset-lbb78" [4b16be02-3cce-4ec1-9435-fabfc1c55ab7] Running
I0828 16:53:17.255369 8351 system_pods.go:61] "registry-6fb4cdfc84-2d9gq" [c7dd58ff-e9b5-4511-9a22-023705b9fdfe] Running
I0828 16:53:17.255387 8351 system_pods.go:61] "registry-proxy-8svd4" [c0749a82-4329-4dc6-92f9-0bd490e250bc] Running
I0828 16:53:17.255409 8351 system_pods.go:61] "snapshot-controller-56fcc65765-h2qqv" [1a3268d8-8a1e-4024-a144-00c9e97e7db0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:53:17.255439 8351 system_pods.go:61] "snapshot-controller-56fcc65765-qvk9j" [572c18e2-432c-42ea-bec9-9ef3707837c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:53:17.255464 8351 system_pods.go:61] "storage-provisioner" [496c4e27-2d97-4e7f-acac-7e8dcd1adbc7] Running
I0828 16:53:17.255493 8351 system_pods.go:74] duration metric: took 11.772389ms to wait for pod list to return data ...
I0828 16:53:17.255516 8351 default_sa.go:34] waiting for default service account to be created ...
I0828 16:53:17.258777 8351 default_sa.go:45] found service account: "default"
I0828 16:53:17.258835 8351 default_sa.go:55] duration metric: took 3.292314ms for default service account to be created ...
I0828 16:53:17.258866 8351 system_pods.go:116] waiting for k8s-apps to be running ...
I0828 16:53:17.268524 8351 system_pods.go:86] 17 kube-system pods found
I0828 16:53:17.268561 8351 system_pods.go:89] "coredns-6f6b679f8f-hcl4z" [9a756596-b7bf-46f4-980d-8062d8e5aa1f] Running
I0828 16:53:17.268571 8351 system_pods.go:89] "csi-hostpath-attacher-0" [c4679fdb-0197-47d9-b556-c74ff2f7b4d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0828 16:53:17.268579 8351 system_pods.go:89] "csi-hostpath-resizer-0" [e762894a-c229-4849-94fb-b1068d4897a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0828 16:53:17.268586 8351 system_pods.go:89] "csi-hostpathplugin-772lg" [5b927797-2d55-4d8e-982a-f8f23f5dd1e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0828 16:53:17.268592 8351 system_pods.go:89] "etcd-addons-161312" [e35c016f-6a9a-4ca9-8ef8-138c9453a446] Running
I0828 16:53:17.268596 8351 system_pods.go:89] "kube-apiserver-addons-161312" [533f4348-327f-4995-8b03-e3b792d2cb4e] Running
I0828 16:53:17.268605 8351 system_pods.go:89] "kube-controller-manager-addons-161312" [0ac59ac9-39b6-474d-b926-ba33667a7ad3] Running
I0828 16:53:17.268610 8351 system_pods.go:89] "kube-ingress-dns-minikube" [82ef5f44-8529-4660-9df8-d1fd1e34055c] Running
I0828 16:53:17.268620 8351 system_pods.go:89] "kube-proxy-j6f7q" [df5e438a-974c-4830-943c-d4b8a0c301cb] Running
I0828 16:53:17.268627 8351 system_pods.go:89] "kube-scheduler-addons-161312" [2fb79932-e90a-44fc-831a-7f9b52a380bc] Running
I0828 16:53:17.268631 8351 system_pods.go:89] "metrics-server-84c5f94fbc-2gwmk" [dd1f5b27-27c7-4ddf-973e-855eb2bbbe37] Running
I0828 16:53:17.268635 8351 system_pods.go:89] "nvidia-device-plugin-daemonset-lbb78" [4b16be02-3cce-4ec1-9435-fabfc1c55ab7] Running
I0828 16:53:17.268645 8351 system_pods.go:89] "registry-6fb4cdfc84-2d9gq" [c7dd58ff-e9b5-4511-9a22-023705b9fdfe] Running
I0828 16:53:17.268648 8351 system_pods.go:89] "registry-proxy-8svd4" [c0749a82-4329-4dc6-92f9-0bd490e250bc] Running
I0828 16:53:17.268655 8351 system_pods.go:89] "snapshot-controller-56fcc65765-h2qqv" [1a3268d8-8a1e-4024-a144-00c9e97e7db0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:53:17.268667 8351 system_pods.go:89] "snapshot-controller-56fcc65765-qvk9j" [572c18e2-432c-42ea-bec9-9ef3707837c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:53:17.268674 8351 system_pods.go:89] "storage-provisioner" [496c4e27-2d97-4e7f-acac-7e8dcd1adbc7] Running
I0828 16:53:17.268681 8351 system_pods.go:126] duration metric: took 9.796624ms to wait for k8s-apps to be running ...
I0828 16:53:17.268692 8351 system_svc.go:44] waiting for kubelet service to be running ....
I0828 16:53:17.268747 8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0828 16:53:17.284186 8351 system_svc.go:56] duration metric: took 15.485363ms WaitForService to wait for kubelet
I0828 16:53:17.284213 8351 kubeadm.go:582] duration metric: took 42.81899845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0828 16:53:17.284237 8351 node_conditions.go:102] verifying NodePressure condition ...
I0828 16:53:17.287606 8351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0828 16:53:17.287638 8351 node_conditions.go:123] node cpu capacity is 2
I0828 16:53:17.287651 8351 node_conditions.go:105] duration metric: took 3.408559ms to run NodePressure ...
I0828 16:53:17.287664 8351 start.go:241] waiting for startup goroutines ...
I0828 16:53:17.426348 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:17.565229 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:17.925386 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:18.069424 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:18.424850 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:18.565127 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:18.925597 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:19.064917 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:19.429622 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:19.565863 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:19.925114 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:20.066614 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:20.424504 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:20.565522 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:20.925956 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:21.065315 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:21.427356 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:21.565691 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:21.924561 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:22.066152 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:22.437939 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:22.564134 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:22.924874 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:23.065725 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:23.425030 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:23.566025 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:23.925194 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:24.065469 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:24.424466 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:24.564838 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:24.924830 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:25.065037 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:25.424584 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:25.564930 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:25.925509 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:26.067721 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:26.425255 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:26.566026 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:26.925492 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:27.068214 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:27.424827 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:27.564486 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:27.925498 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:28.065274 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:28.426072 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:28.574314 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:28.926543 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:29.065899 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:29.427085 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:29.565783 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:29.962337 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:30.076291 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:30.425780 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:30.564077 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:30.924961 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:31.066044 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:31.425714 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:31.564433 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:31.925422 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:32.065417 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:32.424498 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:32.566587 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:32.925412 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:33.066095 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:33.424447 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:33.565281 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:33.924123 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:34.064732 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:34.424788 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:34.564761 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:34.926354 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:35.079103 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:35.424549 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:35.564178 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:35.924975 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:36.065260 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:36.424592 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:36.566752 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:36.936069 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:37.067212 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:37.424907 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:37.565670 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:37.924461 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:38.075045 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:38.425429 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:38.565830 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:38.925111 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:39.065423 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:39.425010 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:39.564672 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:39.925212 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:40.077708 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:40.425213 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:40.565414 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:40.925348 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:41.065247 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:41.424605 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:41.565102 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:41.925122 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:42.065296 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:42.427570 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:42.569391 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:42.925182 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:43.065035 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:43.424563 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:43.568650 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:43.925475 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:44.065146 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:44.424656 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:44.570779 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:44.924539 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:45.113566 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:45.425000 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:45.565027 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:53:45.926335 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:46.065689 8351 kapi.go:107] duration metric: took 57.506016712s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0828 16:53:46.424499 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:46.924736 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:47.424809 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:47.925324 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:48.424426 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:48.925122 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:49.424827 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:49.925249 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:50.425214 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:50.925124 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:51.424553 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:51.925908 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:52.425991 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:52.924617 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:53.424636 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:53.925414 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:54.424340 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:54.925465 8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0828 16:53:55.432894 8351 kapi.go:107] duration metric: took 1m10.512581055s to wait for app.kubernetes.io/name=ingress-nginx ...
I0828 16:54:14.271015 8351 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0828 16:54:14.271040 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:14.770934 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:15.270657 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:15.770619 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:16.271073 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:16.770213 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:17.270109 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:17.771627 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:18.271228 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:18.771266 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:19.270891 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:19.771065 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:20.271254 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:20.770988 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:21.271082 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:21.771337 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:22.271744 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:22.771128 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:23.270985 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:23.770705 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:24.270839 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:24.771030 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:25.270484 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:25.769927 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:26.270624 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:26.770680 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:27.275993 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:27.770055 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:28.271535 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:28.771763 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:29.270245 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:29.771328 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:30.272637 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:30.771018 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:31.271037 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:31.770738 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:32.272701 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:32.770464 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:33.271096 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:33.770694 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:34.270555 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:34.770887 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:35.270858 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:35.770171 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:36.269922 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:36.770380 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:37.271058 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:37.770051 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:38.271444 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:38.770395 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:39.271141 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:39.770574 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:40.271497 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:40.769970 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:41.270269 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:41.771241 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:42.271710 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:42.771175 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:43.276584 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:43.770552 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:44.270789 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:44.770673 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:45.271118 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:45.771804 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:46.272275 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:46.770291 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:47.270337 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:47.770978 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:48.271625 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:48.770602 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:49.270867 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:49.771217 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:50.271394 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:50.771365 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:51.271500 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:51.770299 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:52.271233 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:52.770662 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:53.270165 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:53.786109 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:54.271323 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:54.770652 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:55.270670 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:55.770342 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:56.270861 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:56.770518 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:57.271354 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:57.771810 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:58.271086 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:58.770943 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:59.270438 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:54:59.770554 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:00.304075 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:00.770285 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:01.276314 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:01.770820 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:02.270041 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:02.771373 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:03.271826 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:03.770363 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:04.271220 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:04.771412 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:05.270004 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:05.770773 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:06.270482 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:06.771067 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:07.270757 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:07.770988 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:08.269935 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:08.770910 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:09.270847 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:09.770675 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:10.270674 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:10.771374 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:11.271396 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:11.769817 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:12.270935 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:12.770837 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:13.270856 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:13.770848 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:14.271588 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:14.771189 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:15.271088 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:15.770881 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:16.271587 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:16.770739 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:17.270361 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:17.770909 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:18.270751 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:18.771635 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:19.270900 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:19.771436 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:20.272220 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:20.771423 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:21.270782 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:21.772013 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:22.270889 8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:55:22.770673 8351 kapi.go:107] duration metric: took 2m32.003695392s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0828 16:55:22.772724 8351 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-161312 cluster.
I0828 16:55:22.774548 8351 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0828 16:55:22.776166 8351 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0828 16:55:22.777784 8351 out.go:177] * Enabled addons: metrics-server, cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, volcano, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0828 16:55:22.779689 8351 addons.go:510] duration metric: took 2m48.314097913s for enable addons: enabled=[metrics-server cloud-spanner default-storageclass storage-provisioner ingress-dns nvidia-device-plugin storage-provisioner-rancher volcano inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0828 16:55:22.779748 8351 start.go:246] waiting for cluster config update ...
I0828 16:55:22.779770 8351 start.go:255] writing updated cluster config ...
I0828 16:55:22.780072 8351 ssh_runner.go:195] Run: rm -f paused
I0828 16:55:23.111628 8351 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
I0828 16:55:23.113812 8351 out.go:177] * Done! kubectl is now configured to use "addons-161312" cluster and "default" namespace by default
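As the gcp-auth messages above note, a pod can opt out of credential injection by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of that opt-out, using the cluster context from this run; the pod name, image, and label value "true" are illustrative assumptions, not part of the test output:

    # Hypothetical pod that the gcp-auth webhook should skip, per the
    # gcp-auth-skip-secret label mentioned in the addon output above.
    kubectl --context addons-161312 run no-creds-demo \
      --image=busybox:stable \
      --labels=gcp-auth-skip-secret=true \
      --restart=Never \
      -- sleep 3600

Pods created before the addon was enabled would still need to be recreated (or the addon re-enabled with --refresh, as the message above states) for the mount behavior to change.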
==> Docker <==
Aug 28 17:05:01 addons-161312 dockerd[1280]: time="2024-08-28T17:05:01.593510585Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:05:01 addons-161312 dockerd[1280]: time="2024-08-28T17:05:01.596549935Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.508620296Z" level=info msg="ignoring event" container=f4d0621886e0ae872e0b13263f09c5ab4baec0f18294fa8e59590a8653f671a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.539506643Z" level=info msg="ignoring event" container=7b3d4772f5476d3a16eca71b0da5fd97041d8dc90a4b4d96e4577ea2d13cd583 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.688365767Z" level=info msg="ignoring event" container=e3b2e60f39e61be2535dff6db43b9699f213a698ba34a736b9606ef36427950a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.705271570Z" level=info msg="ignoring event" container=6cb12e876898070ee96bba67ec6163bde92256fa667356782999fb767fa74894 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:11 addons-161312 dockerd[1280]: time="2024-08-28T17:05:11.126444143Z" level=info msg="ignoring event" container=548e9d0494fb0064b7c5c24121148f52647c9d00dc9631767d4812a71fcf5566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:11 addons-161312 dockerd[1280]: time="2024-08-28T17:05:11.298454385Z" level=info msg="ignoring event" container=7ed2932e3a60211057436e76f31d8faa9fabb4e813b9109bff5db275d79017e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:12 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a01e9f709c2810f785523176ef27c88f4235f7cc39ae11b5d0021bacdcf85d84/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Aug 28 17:05:12 addons-161312 dockerd[1280]: time="2024-08-28T17:05:12.214736315Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Aug 28 17:05:12 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:12Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Aug 28 17:05:12 addons-161312 dockerd[1280]: time="2024-08-28T17:05:12.961092036Z" level=info msg="ignoring event" container=29de7d5e4fc3d4d7e854a18bbd1002acdd55e45be797bf81c7bff49340088097 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:14 addons-161312 dockerd[1280]: time="2024-08-28T17:05:14.210746428Z" level=info msg="ignoring event" container=a01e9f709c2810f785523176ef27c88f4235f7cc39ae11b5d0021bacdcf85d84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:15 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/913bdb5f3117e54c5d04f1834c7cbd3ede2da7d18ccc6da80401aeed4f60f233/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Aug 28 17:05:16 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:16Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
Aug 28 17:05:16 addons-161312 dockerd[1280]: time="2024-08-28T17:05:16.889671118Z" level=info msg="ignoring event" container=7718a307feb581b03178ef8fdabe2a9c50a97b8e11cff86d684a347108a265a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:18 addons-161312 dockerd[1280]: time="2024-08-28T17:05:18.289756455Z" level=info msg="ignoring event" container=be486ce9070525a402077008744dc9ca35a7f3c70e26e3833a0ba1507ba134b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:18 addons-161312 dockerd[1280]: time="2024-08-28T17:05:18.413992376Z" level=info msg="ignoring event" container=913bdb5f3117e54c5d04f1834c7cbd3ede2da7d18ccc6da80401aeed4f60f233 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.027249394Z" level=info msg="ignoring event" container=99036035e792b95f9cd5d7a982905f48606ae1eee0c1814210d9dc9a31f994db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.142688360Z" level=info msg="ignoring event" container=a69dc079b82a7807dcec21d632cabad0231019793363657acd1d80e02c11f849 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.383470730Z" level=info msg="ignoring event" container=56543a0b8482e6f3cc3321b83bbe45bbf71e971da9f7df0854751f45b48bae2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:19 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-8svd4_kube-system\": unexpected command output nsenter: cannot open /proc/3644/ns/net: No such file or directory\n with error: exit status 1"
Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.691463939Z" level=info msg="ignoring event" container=8b27242d5b4eee6de4916df367071864e5b75848fe8b35437be8aae2ef824fb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:05:20 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a093c9115a3588705897d0d0e40d4d99d3e86bf9c8b5bae061d9507b42b321dc/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Aug 28 17:05:20 addons-161312 dockerd[1280]: time="2024-08-28T17:05:20.522512690Z" level=info msg="ignoring event" container=8432ee7d1c481b56a1686a6e4190a9fd133782b412d9ec1d822220f75324796c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
8432ee7d1c481 fc9db2894f4e4 Less than a second ago Exited helper-pod 0 a093c9115a358 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
29de7d5e4fc3d busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 8 seconds ago Exited helper-pod 0 a01e9f709c281 helper-pod-create-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
0c8e8e8da0b8a ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc 49 seconds ago Exited gadget 7 142c22ab6df73 gadget-ml8j2
1554916b100cf gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 2a2572df2af14 gcp-auth-89d5ffd79-cmxxh
cd0483b058900 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 f863098cd56a0 ingress-nginx-controller-bc57996ff-xfz6v
00b7abf90d0c9 420193b27261a 11 minutes ago Exited patch 1 5fdb0ece2aa34 ingress-nginx-admission-patch-vlb58
adc70c6243974 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 495a97876574a ingress-nginx-admission-create-klgcd
68940392a1c43 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 81e0a79b34fc6 local-path-provisioner-86d989889c-dsnx5
4d66551822250 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 3d6b786a107f1 metrics-server-84c5f94fbc-2gwmk
5fbb650a6d9a1 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 b389c2e73075a cloud-spanner-emulator-769b77f747-8spwt
044e1b7fedf76 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 5882e8babd681 kube-ingress-dns-minikube
43100bccad9bb ba04bb24b9575 12 minutes ago Running storage-provisioner 0 92bfd727a018c storage-provisioner
e4d8a13c0eca6 2437cf7621777 12 minutes ago Running coredns 0 a3a3ff086dd15 coredns-6f6b679f8f-hcl4z
9818f836df069 71d55d66fd4ee 12 minutes ago Running kube-proxy 0 027916b115c06 kube-proxy-j6f7q
4e27c6ac85dc0 fcb0683e6bdbd 12 minutes ago Running kube-controller-manager 0 d1c04a6a9ffe3 kube-controller-manager-addons-161312
ae1dc3c789881 fbbbd428abb4d 12 minutes ago Running kube-scheduler 0 c7757778c6d63 kube-scheduler-addons-161312
71776afd39911 27e3830e14027 12 minutes ago Running etcd 0 3c35e2fa9c0ad etcd-addons-161312
a19a095ebea3e cd0f0ae0ec9e0 12 minutes ago Running kube-apiserver 0 5bda477f20731 kube-apiserver-addons-161312
==> controller_ingress [cd0483b05890] <==
W0828 16:53:55.163724 7 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0828 16:53:55.164110 7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I0828 16:53:55.179502 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
I0828 16:53:55.733876 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0828 16:53:55.763286 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0828 16:53:55.777080 7 nginx.go:271] "Starting NGINX Ingress controller"
I0828 16:53:55.795784 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f38accfa-91e9-43ae-b242-cbccb64c4b02", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0828 16:53:55.802167 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"aeec0a62-3ec6-4ae6-8077-709c338fac49", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0828 16:53:55.802638 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"95fadb9b-3723-4373-8e34-fd0f2383e603", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0828 16:53:56.978480 7 nginx.go:317] "Starting NGINX process"
I0828 16:53:56.978675 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0828 16:53:56.979193 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0828 16:53:56.979417 7 controller.go:193] "Configuration changes detected, backend reload required"
I0828 16:53:56.996986 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0828 16:53:56.997126 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xfz6v"
I0828 16:53:57.007039 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xfz6v" node="addons-161312"
I0828 16:53:57.034221 7 controller.go:213] "Backend successfully reloaded"
I0828 16:53:57.034513 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0828 16:53:57.034634 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xfz6v", UID:"fbdde173-be80-4530-8596-b91cbae0540e", APIVersion:"v1", ResourceVersion:"1230", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
==> coredns [e4d8a13c0eca] <==
[INFO] 10.244.0.7:37172 - 51637 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096981s
[INFO] 10.244.0.7:40454 - 60681 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002620176s
[INFO] 10.244.0.7:40454 - 32523 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002598778s
[INFO] 10.244.0.7:48561 - 4416 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00016453s
[INFO] 10.244.0.7:48561 - 44614 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095446s
[INFO] 10.244.0.7:33359 - 29396 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085313s
[INFO] 10.244.0.7:33359 - 40667 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047456s
[INFO] 10.244.0.7:57900 - 24274 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043846s
[INFO] 10.244.0.7:57900 - 52175 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003437s
[INFO] 10.244.0.7:34971 - 40716 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004502s
[INFO] 10.244.0.7:34971 - 3634 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036289s
[INFO] 10.244.0.7:46595 - 65471 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001840335s
[INFO] 10.244.0.7:46595 - 63933 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001700821s
[INFO] 10.244.0.7:51720 - 17491 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000055998s
[INFO] 10.244.0.7:51720 - 63313 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062364s
[INFO] 10.244.0.25:32917 - 37251 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000383306s
[INFO] 10.244.0.25:41148 - 22610 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000323707s
[INFO] 10.244.0.25:53401 - 30384 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152069s
[INFO] 10.244.0.25:37595 - 1848 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141304s
[INFO] 10.244.0.25:47820 - 14472 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127339s
[INFO] 10.244.0.25:33809 - 57194 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108796s
[INFO] 10.244.0.25:58074 - 19171 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.0027328s
[INFO] 10.244.0.25:36644 - 3267 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002977116s
[INFO] 10.244.0.25:37481 - 4397 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002412286s
[INFO] 10.244.0.25:43196 - 43493 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002487721s
==> describe nodes <==
Name: addons-161312
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-161312
kubernetes.io/os=linux
minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
minikube.k8s.io/name=addons-161312
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_28T16_52_30_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-161312
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 28 Aug 2024 16:52:26 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-161312
AcquireTime: <unset>
RenewTime: Wed, 28 Aug 2024 17:05:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 28 Aug 2024 17:01:10 +0000 Wed, 28 Aug 2024 16:52:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 28 Aug 2024 17:01:10 +0000 Wed, 28 Aug 2024 16:52:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 28 Aug 2024 17:01:10 +0000 Wed, 28 Aug 2024 16:52:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 28 Aug 2024 17:01:10 +0000 Wed, 28 Aug 2024 16:52:27 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-161312
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
System Info:
Machine ID: 192e84d6552845128e8e3999ee1f3130
System UUID: 65678d4a-b43f-4c7c-940d-443e3c36e38e
Boot ID: 4e364349-6d08-4a99-bc76-4bf6d585326a
Kernel Version: 5.15.0-1068-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.1.2
Kubelet Version: v1.31.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s
default cloud-spanner-emulator-769b77f747-8spwt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-ml8j2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-cmxxh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-xfz6v 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-6f6b679f8f-hcl4z 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system etcd-addons-161312 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-161312 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-161312 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-j6f7q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-161312 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-2gwmk 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
local-path-storage local-path-provisioner-86d989889c-dsnx5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (5%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-161312 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-161312 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-161312 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-161312 event: Registered Node addons-161312 in Controller
==> dmesg <==
[Aug28 16:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.016366] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.491983] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.065955] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.002669] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.018580] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.005263] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.003912] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.767668] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.821413] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [71776afd3991] <==
{"level":"info","ts":"2024-08-28T16:52:22.578046Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-28T16:52:22.578060Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-28T16:52:22.647690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-08-28T16:52:22.647732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-08-28T16:52:22.647755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-08-28T16:52:22.647775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-08-28T16:52:22.647781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-08-28T16:52:22.647791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-08-28T16:52:22.647799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-08-28T16:52:22.651515Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-161312 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-28T16:52:22.651670Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-28T16:52:22.651956Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:22.652042Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-28T16:52:22.652153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-28T16:52:22.652181Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-28T16:52:22.652791Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-28T16:52:22.653673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-08-28T16:52:22.654271Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-28T16:52:22.655055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-28T16:52:22.655122Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:22.655192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:22.655211Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T17:02:24.061850Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1842}
{"level":"info","ts":"2024-08-28T17:02:24.107504Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1842,"took":"45.087181ms","hash":3577647896,"current-db-size-bytes":9084928,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4907008,"current-db-size-in-use":"4.9 MB"}
{"level":"info","ts":"2024-08-28T17:02:24.107554Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3577647896,"revision":1842,"compact-revision":-1}
==> gcp-auth [1554916b100c] <==
2024/08/28 16:55:21 GCP Auth Webhook started!
2024/08/28 16:55:39 Ready to marshal response ...
2024/08/28 16:55:39 Ready to write response ...
2024/08/28 16:55:40 Ready to marshal response ...
2024/08/28 16:55:40 Ready to write response ...
2024/08/28 16:56:04 Ready to marshal response ...
2024/08/28 16:56:04 Ready to write response ...
2024/08/28 16:56:04 Ready to marshal response ...
2024/08/28 16:56:04 Ready to write response ...
2024/08/28 16:56:04 Ready to marshal response ...
2024/08/28 16:56:04 Ready to write response ...
2024/08/28 17:04:18 Ready to marshal response ...
2024/08/28 17:04:18 Ready to write response ...
2024/08/28 17:04:30 Ready to marshal response ...
2024/08/28 17:04:30 Ready to write response ...
2024/08/28 17:04:48 Ready to marshal response ...
2024/08/28 17:04:48 Ready to write response ...
2024/08/28 17:05:11 Ready to marshal response ...
2024/08/28 17:05:11 Ready to write response ...
2024/08/28 17:05:11 Ready to marshal response ...
2024/08/28 17:05:11 Ready to write response ...
2024/08/28 17:05:19 Ready to marshal response ...
2024/08/28 17:05:19 Ready to write response ...
==> kernel <==
17:05:21 up 47 min, 0 users, load average: 1.30, 1.17, 0.96
Linux addons-161312 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kube-apiserver [a19a095ebea3] <==
I0828 16:55:55.093054 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0828 16:55:55.405417 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0828 16:55:55.450727 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0828 16:55:55.594003 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0828 16:55:55.740262 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0828 16:55:56.071480 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0828 16:55:56.094489 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0828 16:55:56.137551 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0828 16:55:56.210788 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0828 16:55:56.655661 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0828 16:55:56.799629 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0828 17:04:38.211417 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0828 17:05:04.281089 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0828 17:05:04.281996 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0828 17:05:04.311523 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0828 17:05:04.311746 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0828 17:05:04.334194 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0828 17:05:04.334252 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0828 17:05:04.344245 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0828 17:05:04.344483 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0828 17:05:04.389252 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0828 17:05:04.389735 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0828 17:05:05.343828 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0828 17:05:05.389586 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0828 17:05:05.402394 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [4e27c6ac85dc] <==
E0828 17:05:06.599540 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:06.942912 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:06.942953 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:08.106022 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:08.106072 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:08.729795 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:08.729842 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:09.204500 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:09.204547 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:11.729194 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:11.729238 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:12.248490 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:12.248593 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:12.550038 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:12.550086 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:12.643435 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:12.643484 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:14.120937 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:14.120985 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:05:14.891740 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:14.892057 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0828 17:05:18.917220 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="4.513µs"
W0828 17:05:20.183660 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:05:20.183704 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0828 17:05:20.585217 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="5.169µs"
==> kube-proxy [9818f836df06] <==
I0828 16:52:35.667755 1 server_linux.go:66] "Using iptables proxy"
I0828 16:52:35.796830 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0828 16:52:35.797031 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0828 16:52:35.832021 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0828 16:52:35.832076 1 server_linux.go:169] "Using iptables Proxier"
I0828 16:52:35.836909 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0828 16:52:35.840586 1 server.go:483] "Version info" version="v1.31.0"
I0828 16:52:35.840627 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0828 16:52:35.862133 1 config.go:197] "Starting service config controller"
I0828 16:52:35.862196 1 shared_informer.go:313] Waiting for caches to sync for service config
I0828 16:52:35.862265 1 config.go:104] "Starting endpoint slice config controller"
I0828 16:52:35.862271 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0828 16:52:35.864610 1 config.go:326] "Starting node config controller"
I0828 16:52:35.864625 1 shared_informer.go:313] Waiting for caches to sync for node config
I0828 16:52:35.963283 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0828 16:52:35.963560 1 shared_informer.go:320] Caches are synced for service config
I0828 16:52:35.964948 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [ae1dc3c78988] <==
I0828 16:52:26.548771 1 serving.go:386] Generated self-signed cert in-memory
W0828 16:52:28.088874 1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0828 16:52:28.088911 1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0828 16:52:28.088922 1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
W0828 16:52:28.088930 1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0828 16:52:28.111220 1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
I0828 16:52:28.111491 1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0828 16:52:28.114120 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0828 16:52:28.114374 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0828 16:52:28.114942 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0828 16:52:28.115104 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0828 16:52:28.117491 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0828 16:52:28.117732 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0828 16:52:29.314812 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.600473 2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641ea3b4-9444-43cc-88b0-461a677bd1a7-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7" (OuterVolumeSpecName: "data") pod "641ea3b4-9444-43cc-88b0-461a677bd1a7" (UID: "641ea3b4-9444-43cc-88b0-461a677bd1a7"). InnerVolumeSpecName "pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.602445 2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641ea3b4-9444-43cc-88b0-461a677bd1a7-kube-api-access-zmn7c" (OuterVolumeSpecName: "kube-api-access-zmn7c") pod "641ea3b4-9444-43cc-88b0-461a677bd1a7" (UID: "641ea3b4-9444-43cc-88b0-461a677bd1a7"). InnerVolumeSpecName "kube-api-access-zmn7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.700873 2324 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zmn7c\" (UniqueName: \"kubernetes.io/projected/641ea3b4-9444-43cc-88b0-461a677bd1a7-kube-api-access-zmn7c\") on node \"addons-161312\" DevicePath \"\""
Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.700917 2324 reconciler_common.go:288] "Volume detached for volume \"pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UniqueName: \"kubernetes.io/host-path/641ea3b4-9444-43cc-88b0-461a677bd1a7-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\") on node \"addons-161312\" DevicePath \"\""
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.425928 2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f982f0e-567d-48a5-b66c-c6bf898fc4b7" path="/var/lib/kubelet/pods/3f982f0e-567d-48a5-b66c-c6bf898fc4b7/volumes"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.426374 2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641ea3b4-9444-43cc-88b0-461a677bd1a7" path="/var/lib/kubelet/pods/641ea3b4-9444-43cc-88b0-461a677bd1a7/volumes"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.522460 2324 scope.go:117] "RemoveContainer" containerID="99036035e792b95f9cd5d7a982905f48606ae1eee0c1814210d9dc9a31f994db"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.617981 2324 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvlm2\" (UniqueName: \"kubernetes.io/projected/c7dd58ff-e9b5-4511-9a22-023705b9fdfe-kube-api-access-dvlm2\") pod \"c7dd58ff-e9b5-4511-9a22-023705b9fdfe\" (UID: \"c7dd58ff-e9b5-4511-9a22-023705b9fdfe\") "
Aug 28 17:05:19 addons-161312 kubelet[2324]: E0828 17:05:19.622482 2324 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="641ea3b4-9444-43cc-88b0-461a677bd1a7" containerName="busybox"
Aug 28 17:05:19 addons-161312 kubelet[2324]: E0828 17:05:19.622513 2324 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7dd58ff-e9b5-4511-9a22-023705b9fdfe" containerName="registry"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.622553 2324 memory_manager.go:354] "RemoveStaleState removing state" podUID="641ea3b4-9444-43cc-88b0-461a677bd1a7" containerName="busybox"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.622563 2324 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7dd58ff-e9b5-4511-9a22-023705b9fdfe" containerName="registry"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.624671 2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7dd58ff-e9b5-4511-9a22-023705b9fdfe-kube-api-access-dvlm2" (OuterVolumeSpecName: "kube-api-access-dvlm2") pod "c7dd58ff-e9b5-4511-9a22-023705b9fdfe" (UID: "c7dd58ff-e9b5-4511-9a22-023705b9fdfe"). InnerVolumeSpecName "kube-api-access-dvlm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.625565 2324 scope.go:117] "RemoveContainer" containerID="7718a307feb581b03178ef8fdabe2a9c50a97b8e11cff86d684a347108a265a7"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718625 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-gcp-creds\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718727 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5qh\" (UniqueName: \"kubernetes.io/projected/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-kube-api-access-6p5qh\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718794 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-script\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718838 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-data\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718901 2324 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dvlm2\" (UniqueName: \"kubernetes.io/projected/c7dd58ff-e9b5-4511-9a22-023705b9fdfe-kube-api-access-dvlm2\") on node \"addons-161312\" DevicePath \"\""
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.920424 2324 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984v9\" (UniqueName: \"kubernetes.io/projected/c0749a82-4329-4dc6-92f9-0bd490e250bc-kube-api-access-984v9\") pod \"c0749a82-4329-4dc6-92f9-0bd490e250bc\" (UID: \"c0749a82-4329-4dc6-92f9-0bd490e250bc\") "
Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.922576 2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0749a82-4329-4dc6-92f9-0bd490e250bc-kube-api-access-984v9" (OuterVolumeSpecName: "kube-api-access-984v9") pod "c0749a82-4329-4dc6-92f9-0bd490e250bc" (UID: "c0749a82-4329-4dc6-92f9-0bd490e250bc"). InnerVolumeSpecName "kube-api-access-984v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 28 17:05:20 addons-161312 kubelet[2324]: I0828 17:05:20.023212 2324 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-984v9\" (UniqueName: \"kubernetes.io/projected/c0749a82-4329-4dc6-92f9-0bd490e250bc-kube-api-access-984v9\") on node \"addons-161312\" DevicePath \"\""
Aug 28 17:05:20 addons-161312 kubelet[2324]: I0828 17:05:20.714887 2324 scope.go:117] "RemoveContainer" containerID="a69dc079b82a7807dcec21d632cabad0231019793363657acd1d80e02c11f849"
Aug 28 17:05:21 addons-161312 kubelet[2324]: I0828 17:05:21.395592 2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0749a82-4329-4dc6-92f9-0bd490e250bc" path="/var/lib/kubelet/pods/c0749a82-4329-4dc6-92f9-0bd490e250bc/volumes"
Aug 28 17:05:21 addons-161312 kubelet[2324]: I0828 17:05:21.396059 2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7dd58ff-e9b5-4511-9a22-023705b9fdfe" path="/var/lib/kubelet/pods/c7dd58ff-e9b5-4511-9a22-023705b9fdfe/volumes"
==> storage-provisioner [43100bccad9b] <==
I0828 16:52:43.132130 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0828 16:52:43.160672 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0828 16:52:43.160726 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0828 16:52:43.182372 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0828 16:52:43.184829 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-161312_8b13d2ad-9127-486d-9201-a9ba8289a776!
I0828 16:52:43.185360 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"404dc3dd-1469-4d58-b8ff-a40d2e3414ce", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-161312_8b13d2ad-9127-486d-9201-a9ba8289a776 became leader
I0828 16:52:43.285010 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-161312_8b13d2ad-9127-486d-9201-a9ba8289a776!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-161312 -n addons-161312
helpers_test.go:261: (dbg) Run: kubectl --context addons-161312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-161312 describe pod busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-161312 describe pod busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7: exit status 1 (101.649814ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-161312/192.168.49.2
Start Time: Wed, 28 Aug 2024 16:56:04 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l5jhw (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-l5jhw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m17s default-scheduler Successfully assigned default/busybox to addons-161312
Normal Pulling 7m54s (x4 over 9m17s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m54s (x4 over 9m17s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m54s (x4 over 9m17s) kubelet Error: ErrImagePull
Warning Failed 7m26s (x6 over 9m16s) kubelet Error: ImagePullBackOff
Normal BackOff 4m3s (x21 over 9m16s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-klgcd" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-vlb58" not found
Error from server (NotFound): pods "helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-161312 describe pod busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.63s)