=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.463094ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zt9dz" [e7f2fc50-5c03-4aec-9040-85d9963af8e6] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006190148s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r92r6" [5d64f5cf-2b0e-40f7-88ca-5822f9941c5a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004741338s
addons_test.go:342: (dbg) Run: kubectl --context addons-731605 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-731605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-731605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.114054061s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-731605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
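The failed step above is essentially an HTTP reachability probe with a timeout (`wget --spider` against the registry Service DNS name). As a standalone illustration of the same idea, here is a minimal Python sketch of a spider-style check; the function name and URL are illustrative, not the test's actual helper:

```python
import urllib.request
import urllib.error

def spider(url: str, timeout: float = 5.0) -> bool:
    """HEAD-style reachability probe, roughly what `wget --spider` does:
    succeed on any 2xx/3xx response, fail on error or timeout."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False
```

Run in-cluster (as the test does via `kubectl run`), such a probe distinguishes a DNS/Service routing failure, as seen here, from a registry pod that is merely slow to respond.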
addons_test.go:361: (dbg) Run: out/minikube-linux-arm64 -p addons-731605 ip
addons_test.go:390: (dbg) Run: out/minikube-linux-arm64 -p addons-731605 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-731605
helpers_test.go:235: (dbg) docker inspect addons-731605:
-- stdout --
[
{
"Id": "e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62",
"Created": "2024-09-17T16:56:21.054930089Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8820,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-17T16:56:21.228982647Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
"ResolvConfPath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/hostname",
"HostsPath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/hosts",
"LogPath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62-json.log",
"Name": "/addons-731605",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-731605:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-731605",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911-init/diff:/var/lib/docker/overlay2/661d29c6509a75bb24f7ab0157c48263e53b9e4426011b7a7b71a55adee7d7b7/diff",
"MergedDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911/merged",
"UpperDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911/diff",
"WorkDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-731605",
"Source": "/var/lib/docker/volumes/addons-731605/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-731605",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-731605",
"name.minikube.sigs.k8s.io": "addons-731605",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5a36375090e0a8067406466fa321e9b2daabbf67ac5628f22d28883325fc6b84",
"SandboxKey": "/var/run/docker/netns/5a36375090e0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-731605": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "4e1264060639f1108fea50fdf8b216f0e6b32a99ca56b1ad2099317731b4a5b0",
"EndpointID": "d8f62c04227b62c91a980f4712453ef6cf32f2a9383aa293d115fdff002c4592",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-731605",
"e9b97591b363"
]
}
}
}
}
]
-- /stdout --
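For post-mortem triage, the host port that Docker mapped for the in-cluster registry (container port `5000/tcp`, here bound to `127.0.0.1:32770`) can be extracted from the `docker inspect` JSON programmatically. A small sketch, assuming the `NetworkSettings.Ports` shape shown above; `host_port` and the embedded `sample` are illustrative, not part of the test suite:

```python
import json
from typing import Optional

def host_port(inspect_json: str, container_port: str) -> Optional[str]:
    """Return the host port bound to `container_port` (e.g. "5000/tcp")
    in `docker inspect` output, or None if the port is unmapped."""
    data = json.loads(inspect_json)
    bindings = data[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

# Trimmed-down stand-in for the inspect output above.
sample = json.dumps([{
    "NetworkSettings": {
        "Ports": {"5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}]}
    }
}])
```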
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-731605 -n addons-731605
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-731605 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 logs -n 25: (1.411554742s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-017300 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | |
| | -p download-only-017300 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| delete | -p download-only-017300 | download-only-017300 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| start | -o=json --download-only | download-only-253478 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | |
| | -p download-only-253478 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| delete | -p download-only-253478 | download-only-253478 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| delete | -p download-only-017300 | download-only-017300 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| delete | -p download-only-253478 | download-only-253478 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| start | --download-only -p | download-docker-449671 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | |
| | download-docker-449671 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-449671 | download-docker-449671 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| start | --download-only -p | binary-mirror-460466 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | |
| | binary-mirror-460466 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:37897 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-460466 | binary-mirror-460466 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
| addons | disable dashboard -p | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | |
| | addons-731605 | | | | | |
| addons | enable dashboard -p | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | |
| | addons-731605 | | | | | |
| start | -p addons-731605 --wait=true | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:59 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-731605 addons disable | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:00 UTC | 17 Sep 24 17:00 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-731605 addons disable | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-731605 addons | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-731605 addons | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| | -p addons-731605 | | | | | |
| ip | addons-731605 ip | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| addons | addons-731605 addons disable | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ssh | addons-731605 ssh cat | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| | /opt/local-path-provisioner/pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4_default_test-pvc/file1 | | | | | |
| addons | addons-731605 addons disable | addons-731605 | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/17 16:55:56
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0917 16:55:56.261150 8324 out.go:345] Setting OutFile to fd 1 ...
I0917 16:55:56.261363 8324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 16:55:56.261390 8324 out.go:358] Setting ErrFile to fd 2...
I0917 16:55:56.261412 8324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 16:55:56.261693 8324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
I0917 16:55:56.262221 8324 out.go:352] Setting JSON to false
I0917 16:55:56.263067 8324 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2303,"bootTime":1726589854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0917 16:55:56.263171 8324 start.go:139] virtualization:
I0917 16:55:56.266206 8324 out.go:177] * [addons-731605] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0917 16:55:56.268807 8324 out.go:177] - MINIKUBE_LOCATION=19662
I0917 16:55:56.268859 8324 notify.go:220] Checking for updates...
I0917 16:55:56.273096 8324 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0917 16:55:56.275593 8324 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
I0917 16:55:56.277853 8324 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
I0917 16:55:56.279816 8324 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0917 16:55:56.281828 8324 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0917 16:55:56.284349 8324 driver.go:394] Setting default libvirt URI to qemu:///system
I0917 16:55:56.306305 8324 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
I0917 16:55:56.306436 8324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0917 16:55:56.369425 8324 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 16:55:56.359203789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0917 16:55:56.369541 8324 docker.go:318] overlay module found
I0917 16:55:56.373244 8324 out.go:177] * Using the docker driver based on user configuration
I0917 16:55:56.375154 8324 start.go:297] selected driver: docker
I0917 16:55:56.375171 8324 start.go:901] validating driver "docker" against <nil>
I0917 16:55:56.375186 8324 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0917 16:55:56.375876 8324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0917 16:55:56.430364 8324 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 16:55:56.42070162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0917 16:55:56.430574 8324 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0917 16:55:56.430811 8324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0917 16:55:56.432904 8324 out.go:177] * Using Docker driver with root privileges
I0917 16:55:56.434914 8324 cni.go:84] Creating CNI manager for ""
I0917 16:55:56.434980 8324 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0917 16:55:56.434994 8324 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0917 16:55:56.435064 8324 start.go:340] cluster config:
{Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0917 16:55:56.437716 8324 out.go:177] * Starting "addons-731605" primary control-plane node in "addons-731605" cluster
I0917 16:55:56.439884 8324 cache.go:121] Beginning downloading kic base image for docker with docker
I0917 16:55:56.442514 8324 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
I0917 16:55:56.444890 8324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0917 16:55:56.444945 8324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0917 16:55:56.444956 8324 cache.go:56] Caching tarball of preloaded images
I0917 16:55:56.444990 8324 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
I0917 16:55:56.445058 8324 preload.go:172] Found /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0917 16:55:56.445070 8324 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0917 16:55:56.445482 8324 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/config.json ...
I0917 16:55:56.445514 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/config.json: {Name:mkcd6dda44a0dbe49e232a889ca4c689e63d6c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:55:56.460939 8324 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
I0917 16:55:56.461075 8324 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
I0917 16:55:56.461099 8324 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
I0917 16:55:56.461109 8324 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
I0917 16:55:56.461117 8324 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
I0917 16:55:56.461123 8324 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
I0917 16:56:14.134456 8324 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
I0917 16:56:14.134495 8324 cache.go:194] Successfully downloaded all kic artifacts
I0917 16:56:14.134532 8324 start.go:360] acquireMachinesLock for addons-731605: {Name:mk85601fc5fe208ad3ac2f2740b3e068a6bf1f0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 16:56:14.134659 8324 start.go:364] duration metric: took 106.846µs to acquireMachinesLock for "addons-731605"
I0917 16:56:14.134705 8324 start.go:93] Provisioning new machine with config: &{Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0917 16:56:14.134788 8324 start.go:125] createHost starting for "" (driver="docker")
I0917 16:56:14.137281 8324 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0917 16:56:14.137594 8324 start.go:159] libmachine.API.Create for "addons-731605" (driver="docker")
I0917 16:56:14.137631 8324 client.go:168] LocalClient.Create starting
I0917 16:56:14.137760 8324 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem
I0917 16:56:14.646320 8324 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem
I0917 16:56:14.922390 8324 cli_runner.go:164] Run: docker network inspect addons-731605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0917 16:56:14.940931 8324 cli_runner.go:211] docker network inspect addons-731605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0917 16:56:14.941018 8324 network_create.go:284] running [docker network inspect addons-731605] to gather additional debugging logs...
I0917 16:56:14.941043 8324 cli_runner.go:164] Run: docker network inspect addons-731605
W0917 16:56:14.955493 8324 cli_runner.go:211] docker network inspect addons-731605 returned with exit code 1
I0917 16:56:14.955524 8324 network_create.go:287] error running [docker network inspect addons-731605]: docker network inspect addons-731605: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-731605 not found
I0917 16:56:14.955543 8324 network_create.go:289] output of [docker network inspect addons-731605]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-731605 not found
** /stderr **
I0917 16:56:14.955654 8324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0917 16:56:14.972264 8324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c1c4b0}
I0917 16:56:14.972300 8324 network_create.go:124] attempt to create docker network addons-731605 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0917 16:56:14.972352 8324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-731605 addons-731605
I0917 16:56:15.062238 8324 network_create.go:108] docker network addons-731605 192.168.49.0/24 created
I0917 16:56:15.062278 8324 kic.go:121] calculated static IP "192.168.49.2" for the "addons-731605" container
I0917 16:56:15.062366 8324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0917 16:56:15.080021 8324 cli_runner.go:164] Run: docker volume create addons-731605 --label name.minikube.sigs.k8s.io=addons-731605 --label created_by.minikube.sigs.k8s.io=true
I0917 16:56:15.100611 8324 oci.go:103] Successfully created a docker volume addons-731605
I0917 16:56:15.100720 8324 cli_runner.go:164] Run: docker run --rm --name addons-731605-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-731605 --entrypoint /usr/bin/test -v addons-731605:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
I0917 16:56:17.217017 8324 cli_runner.go:217] Completed: docker run --rm --name addons-731605-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-731605 --entrypoint /usr/bin/test -v addons-731605:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.116255038s)
I0917 16:56:17.217043 8324 oci.go:107] Successfully prepared a docker volume addons-731605
I0917 16:56:17.217076 8324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0917 16:56:17.217095 8324 kic.go:194] Starting extracting preloaded images to volume ...
I0917 16:56:17.217159 8324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-731605:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
I0917 16:56:20.983953 8324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-731605:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.76673548s)
I0917 16:56:20.983986 8324 kic.go:203] duration metric: took 3.766887634s to extract preloaded images to volume ...
W0917 16:56:20.984125 8324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0917 16:56:20.984246 8324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0917 16:56:21.039190 8324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-731605 --name addons-731605 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-731605 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-731605 --network addons-731605 --ip 192.168.49.2 --volume addons-731605:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
I0917 16:56:21.411232 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Running}}
I0917 16:56:21.435156 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:21.457628 8324 cli_runner.go:164] Run: docker exec addons-731605 stat /var/lib/dpkg/alternatives/iptables
I0917 16:56:21.517937 8324 oci.go:144] the created container "addons-731605" has a running status.
I0917 16:56:21.517962 8324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa...
I0917 16:56:22.183649 8324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0917 16:56:22.210950 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:22.231854 8324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0917 16:56:22.231877 8324 kic_runner.go:114] Args: [docker exec --privileged addons-731605 chown docker:docker /home/docker/.ssh/authorized_keys]
I0917 16:56:22.311567 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:22.337021 8324 machine.go:93] provisionDockerMachine start ...
I0917 16:56:22.337170 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:22.360127 8324 main.go:141] libmachine: Using SSH client type: native
I0917 16:56:22.360382 8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0917 16:56:22.360398 8324 main.go:141] libmachine: About to run SSH command:
hostname
I0917 16:56:22.507172 8324 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-731605
I0917 16:56:22.507254 8324 ubuntu.go:169] provisioning hostname "addons-731605"
I0917 16:56:22.507350 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:22.528932 8324 main.go:141] libmachine: Using SSH client type: native
I0917 16:56:22.529177 8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0917 16:56:22.529189 8324 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-731605 && echo "addons-731605" | sudo tee /etc/hostname
I0917 16:56:22.691965 8324 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-731605
I0917 16:56:22.692110 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:22.709150 8324 main.go:141] libmachine: Using SSH client type: native
I0917 16:56:22.709392 8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0917 16:56:22.709413 8324 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-731605' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-731605/g' /etc/hosts;
else
echo '127.0.1.1 addons-731605' | sudo tee -a /etc/hosts;
fi
fi
I0917 16:56:22.855764 8324 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0917 16:56:22.855792 8324 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19662-2253/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-2253/.minikube}
I0917 16:56:22.855820 8324 ubuntu.go:177] setting up certificates
I0917 16:56:22.855830 8324 provision.go:84] configureAuth start
I0917 16:56:22.855897 8324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-731605
I0917 16:56:22.873731 8324 provision.go:143] copyHostCerts
I0917 16:56:22.873830 8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-2253/.minikube/ca.pem (1078 bytes)
I0917 16:56:22.873960 8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-2253/.minikube/cert.pem (1123 bytes)
I0917 16:56:22.874035 8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-2253/.minikube/key.pem (1679 bytes)
I0917 16:56:22.874101 8324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-2253/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca-key.pem org=jenkins.addons-731605 san=[127.0.0.1 192.168.49.2 addons-731605 localhost minikube]
I0917 16:56:23.842473 8324 provision.go:177] copyRemoteCerts
I0917 16:56:23.842547 8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0917 16:56:23.842586 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:23.863868 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:23.968396 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0917 16:56:23.992842 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0917 16:56:24.017129 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0917 16:56:24.048636 8324 provision.go:87] duration metric: took 1.192786541s to configureAuth
I0917 16:56:24.048707 8324 ubuntu.go:193] setting minikube options for container-runtime
I0917 16:56:24.048930 8324 config.go:182] Loaded profile config "addons-731605": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 16:56:24.048994 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:24.071220 8324 main.go:141] libmachine: Using SSH client type: native
I0917 16:56:24.071465 8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0917 16:56:24.071483 8324 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0917 16:56:24.220246 8324 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0917 16:56:24.220267 8324 ubuntu.go:71] root file system type: overlay
I0917 16:56:24.220399 8324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0917 16:56:24.220474 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:24.239242 8324 main.go:141] libmachine: Using SSH client type: native
I0917 16:56:24.239503 8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0917 16:56:24.239595 8324 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0917 16:56:24.403638 8324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0917 16:56:24.403831 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:24.421172 8324 main.go:141] libmachine: Using SSH client type: native
I0917 16:56:24.421420 8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0917 16:56:24.421444 8324 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0917 16:56:25.220657 8324 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:36.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-17 16:56:24.398097158 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0917 16:56:25.220689 8324 machine.go:96] duration metric: took 2.883599615s to provisionDockerMachine
I0917 16:56:25.220700 8324 client.go:171] duration metric: took 11.083056604s to LocalClient.Create
I0917 16:56:25.220713 8324 start.go:167] duration metric: took 11.083120529s to libmachine.API.Create "addons-731605"
I0917 16:56:25.220720 8324 start.go:293] postStartSetup for "addons-731605" (driver="docker")
I0917 16:56:25.220731 8324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0917 16:56:25.220804 8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0917 16:56:25.220844 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:25.237541 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:25.336887 8324 ssh_runner.go:195] Run: cat /etc/os-release
I0917 16:56:25.340081 8324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0917 16:56:25.340119 8324 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0917 16:56:25.340149 8324 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0917 16:56:25.340161 8324 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0917 16:56:25.340173 8324 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-2253/.minikube/addons for local assets ...
I0917 16:56:25.340249 8324 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-2253/.minikube/files for local assets ...
I0917 16:56:25.340278 8324 start.go:296] duration metric: took 119.551153ms for postStartSetup
I0917 16:56:25.340602 8324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-731605
I0917 16:56:25.356953 8324 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/config.json ...
I0917 16:56:25.357250 8324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0917 16:56:25.357299 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:25.373597 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:25.472150 8324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0917 16:56:25.476661 8324 start.go:128] duration metric: took 11.341853801s to createHost
I0917 16:56:25.476685 8324 start.go:83] releasing machines lock for "addons-731605", held for 11.342014791s
I0917 16:56:25.476756 8324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-731605
I0917 16:56:25.492799 8324 ssh_runner.go:195] Run: cat /version.json
I0917 16:56:25.492860 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:25.493117 8324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0917 16:56:25.493197 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:25.512350 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:25.514436 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:25.607729 8324 ssh_runner.go:195] Run: systemctl --version
I0917 16:56:25.734508 8324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0917 16:56:25.739190 8324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0917 16:56:25.769219 8324 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0917 16:56:25.769310 8324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0917 16:56:25.801320 8324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0917 16:56:25.801348 8324 start.go:495] detecting cgroup driver to use...
I0917 16:56:25.801387 8324 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0917 16:56:25.801494 8324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0917 16:56:25.818450 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0917 16:56:25.829039 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0917 16:56:25.839256 8324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0917 16:56:25.839380 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0917 16:56:25.850292 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0917 16:56:25.860194 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0917 16:56:25.871125 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0917 16:56:25.882105 8324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0917 16:56:25.892792 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0917 16:56:25.904790 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0917 16:56:25.916695 8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0917 16:56:25.927314 8324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0917 16:56:25.936517 8324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0917 16:56:25.945309 8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0917 16:56:26.031256 8324 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0917 16:56:26.133657 8324 start.go:495] detecting cgroup driver to use...
I0917 16:56:26.133725 8324 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0917 16:56:26.133792 8324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0917 16:56:26.149696 8324 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0917 16:56:26.149769 8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0917 16:56:26.164520 8324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0917 16:56:26.185313 8324 ssh_runner.go:195] Run: which cri-dockerd
I0917 16:56:26.192150 8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0917 16:56:26.202924 8324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0917 16:56:26.221456 8324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0917 16:56:26.325669 8324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0917 16:56:26.421083 8324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0917 16:56:26.421223 8324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0917 16:56:26.443025 8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0917 16:56:26.545707 8324 ssh_runner.go:195] Run: sudo systemctl restart docker
I0917 16:56:26.811676 8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0917 16:56:26.825180 8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0917 16:56:26.837988 8324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0917 16:56:26.936983 8324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0917 16:56:27.030049 8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0917 16:56:27.124381 8324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0917 16:56:27.139229 8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0917 16:56:27.151339 8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0917 16:56:27.242132 8324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0917 16:56:27.311721 8324 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0917 16:56:27.311903 8324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0917 16:56:27.315712 8324 start.go:563] Will wait 60s for crictl version
I0917 16:56:27.315828 8324 ssh_runner.go:195] Run: which crictl
I0917 16:56:27.319364 8324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0917 16:56:27.361507 8324 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0917 16:56:27.361630 8324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0917 16:56:27.388099 8324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0917 16:56:27.413051 8324 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0917 16:56:27.413195 8324 cli_runner.go:164] Run: docker network inspect addons-731605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0917 16:56:27.429040 8324 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0917 16:56:27.432667 8324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
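The `/etc/hosts` refresh on the line above uses a grep-and-replace idiom: strip any stale record for the name, append the fresh one, then copy the result into place. A minimal standalone sketch of the same pattern against a scratch file (the paths and the stale `10.0.0.5` entry are illustrative, not taken from this run):

```shell
#!/bin/bash
# Sketch of minikube's hosts-record refresh idiom, run against a scratch file
# rather than the real /etc/hosts. The 10.0.0.5 record is a made-up stale entry.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.5\thost.minikube.internal\n' > "$HOSTS"

TMP=$(mktemp)
# Drop any existing "<tab>host.minikube.internal" record, then append the new one.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  echo "192.168.49.1 host.minikube.internal"; } > "$TMP"
cp "$TMP" "$HOSTS"

cat "$HOSTS"
```

Writing to a temp file and copying it over the original (rather than redirecting in place) avoids truncating the hosts file if the pipeline fails part-way.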
I0917 16:56:27.443400 8324 kubeadm.go:883] updating cluster {Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0917 16:56:27.443531 8324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0917 16:56:27.443597 8324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0917 16:56:27.461547 8324 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0917 16:56:27.461568 8324 docker.go:615] Images already preloaded, skipping extraction
I0917 16:56:27.461654 8324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0917 16:56:27.479877 8324 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0917 16:56:27.479905 8324 cache_images.go:84] Images are preloaded, skipping loading
I0917 16:56:27.479915 8324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0917 16:56:27.480070 8324 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-731605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0917 16:56:27.480164 8324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0917 16:56:27.527379 8324 cni.go:84] Creating CNI manager for ""
I0917 16:56:27.527406 8324 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0917 16:56:27.527417 8324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0917 16:56:27.527436 8324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-731605 NodeName:addons-731605 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0917 16:56:27.527591 8324 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "addons-731605"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
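The rendered kubeadm config above is written to `/var/tmp/minikube/kubeadm.yaml.new` a few lines further down. As a hedged sketch, the fields `kubeadm init` keys on can be spot-checked in such a file with plain grep; on a real node, `kubeadm config validate --config <file>` (available in recent kubeadm releases) is the authoritative check. The trimmed YAML below copies a subset of the config from this log:

```shell
#!/bin/bash
# Write a trimmed copy of the ClusterConfiguration from the log above and
# extract the fields kubeadm init acts on. This only greps the YAML; it does
# not replace `kubeadm config validate` on a real node.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF

grep '^kubernetesVersion:' "$CFG"
grep 'podSubnet' "$CFG"
```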
I0917 16:56:27.527656 8324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0917 16:56:27.536671 8324 binaries.go:44] Found k8s binaries, skipping transfer
I0917 16:56:27.536762 8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0917 16:56:27.545355 8324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0917 16:56:27.563173 8324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0917 16:56:27.581197 8324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0917 16:56:27.599295 8324 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0917 16:56:27.602762 8324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0917 16:56:27.617026 8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0917 16:56:27.724012 8324 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0917 16:56:27.739858 8324 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605 for IP: 192.168.49.2
I0917 16:56:27.739894 8324 certs.go:194] generating shared ca certs ...
I0917 16:56:27.739911 8324 certs.go:226] acquiring lock for ca certs: {Name:mk4233cd6d22b902eb1a88fa3630e0f93cf4a1c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:27.740052 8324 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key
I0917 16:56:28.064797 8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt ...
I0917 16:56:28.064835 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt: {Name:mkbd95fc9c74a7f92bfad573aaef04d265ffc139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:28.065046 8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key ...
I0917 16:56:28.065062 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key: {Name:mkc14a3e90bb0aeeb1c8d549d47c375b4aa84049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:28.065144 8324 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key
I0917 16:56:28.458745 8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.crt ...
I0917 16:56:28.458776 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.crt: {Name:mkf3c2cc5824ee644132fc4e707eed238ff55f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:28.458964 8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key ...
I0917 16:56:28.458977 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key: {Name:mke83ecaf52839c5ac8737034844966e5e358406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:28.459061 8324 certs.go:256] generating profile certs ...
I0917 16:56:28.459127 8324 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.key
I0917 16:56:28.459143 8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt with IP's: []
I0917 16:56:28.968623 8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt ...
I0917 16:56:28.968654 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: {Name:mk46c9961b7a69dcf4244920ae9b53f74531c8f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:28.968845 8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.key ...
I0917 16:56:28.968858 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.key: {Name:mkabebac27c97b452bd7d9ef33854e678b183637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:28.968935 8324 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3
I0917 16:56:28.968957 8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0917 16:56:29.654182 8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3 ...
I0917 16:56:29.654217 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3: {Name:mkfaaf4f5785d6a022ab25176501f2976c697923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:29.654404 8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3 ...
I0917 16:56:29.654419 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3: {Name:mkb5190f64ff492ef601549075a5272edd628524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:29.654504 8324 certs.go:381] copying /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3 -> /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt
I0917 16:56:29.654584 8324 certs.go:385] copying /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3 -> /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key
I0917 16:56:29.654642 8324 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key
I0917 16:56:29.654662 8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt with IP's: []
I0917 16:56:30.128281 8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt ...
I0917 16:56:30.128319 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt: {Name:mkfa931092eb47812407266ea7eeb67a77f37b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:30.128510 8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key ...
I0917 16:56:30.128520 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key: {Name:mkf41edf2a4e0e824181ad6770270ae692211c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:30.128704 8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca-key.pem (1675 bytes)
I0917 16:56:30.128740 8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem (1078 bytes)
I0917 16:56:30.128772 8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem (1123 bytes)
I0917 16:56:30.128832 8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/key.pem (1679 bytes)
I0917 16:56:30.129535 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0917 16:56:30.165849 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0917 16:56:30.202717 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0917 16:56:30.231662 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0917 16:56:30.258967 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0917 16:56:30.285873 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0917 16:56:30.311959 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0917 16:56:30.341571 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0917 16:56:30.367547 8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0917 16:56:30.392209 8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0917 16:56:30.411818 8324 ssh_runner.go:195] Run: openssl version
I0917 16:56:30.417421 8324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0917 16:56:30.427791 8324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0917 16:56:30.431305 8324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
I0917 16:56:30.431372 8324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0917 16:56:30.438568 8324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0917 16:56:30.447706 8324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0917 16:56:30.450942 8324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0917 16:56:30.450991 8324 kubeadm.go:392] StartCluster: {Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0917 16:56:30.451131 8324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0917 16:56:30.468436 8324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0917 16:56:30.477437 8324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0917 16:56:30.486716 8324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0917 16:56:30.486797 8324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0917 16:56:30.495714 8324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0917 16:56:30.495734 8324 kubeadm.go:157] found existing configuration files:
I0917 16:56:30.495789 8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0917 16:56:30.505108 8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0917 16:56:30.505173 8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0917 16:56:30.513675 8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0917 16:56:30.522614 8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0917 16:56:30.522696 8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0917 16:56:30.531834 8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0917 16:56:30.540930 8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0917 16:56:30.541028 8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0917 16:56:30.549645 8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0917 16:56:30.558681 8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0917 16:56:30.558770 8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
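The four grep/rm pairs above implement one stale-config sweep: any kubeconfig that does not mention the expected control-plane endpoint is deleted before `kubeadm init` regenerates it. A self-contained sketch of that loop (the scratch directory and sample file contents are illustrative):

```shell
#!/bin/bash
# Sketch of the stale-kubeconfig cleanup above: remove any kubeconfig that
# does not reference the expected control-plane endpoint. Uses a scratch
# directory with made-up contents instead of /etc/kubernetes.
CONF_DIR=$(mktemp -d)
ENDPOINT="https://control-plane.minikube.internal:8443"

echo "server: https://old-endpoint:8443" > "$CONF_DIR/admin.conf"    # stale
echo "server: $ENDPOINT" > "$CONF_DIR/kubelet.conf"                  # current

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  path="$CONF_DIR/$f"
  [ -e "$path" ] || continue              # absent files (as in this log) are skipped
  if ! grep -q "$ENDPOINT" "$path"; then  # endpoint missing => stale, remove
    rm -f "$path"
  fi
done

ls "$CONF_DIR"
```

In this first-start log all four files are absent, so every grep exits with status 2 and the subsequent `rm -f` is a no-op.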
I0917 16:56:30.567442 8324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0917 16:56:30.605214 8324 kubeadm.go:310] W0917 16:56:30.604506 1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0917 16:56:30.606423 8324 kubeadm.go:310] W0917 16:56:30.605815 1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0917 16:56:30.631753 8324 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0917 16:56:30.691902 8324 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0917 16:56:46.616572 8324 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0917 16:56:46.616632 8324 kubeadm.go:310] [preflight] Running pre-flight checks
I0917 16:56:46.616727 8324 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0917 16:56:46.616824 8324 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0917 16:56:46.616874 8324 kubeadm.go:310] OS: Linux
I0917 16:56:46.616930 8324 kubeadm.go:310] CGROUPS_CPU: enabled
I0917 16:56:46.616985 8324 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0917 16:56:46.617033 8324 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0917 16:56:46.617080 8324 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0917 16:56:46.617129 8324 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0917 16:56:46.617179 8324 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0917 16:56:46.617224 8324 kubeadm.go:310] CGROUPS_PIDS: enabled
I0917 16:56:46.617285 8324 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0917 16:56:46.617332 8324 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0917 16:56:46.617409 8324 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0917 16:56:46.617523 8324 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0917 16:56:46.617613 8324 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0917 16:56:46.617716 8324 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0917 16:56:46.619748 8324 out.go:235] - Generating certificates and keys ...
I0917 16:56:46.619851 8324 kubeadm.go:310] [certs] Using existing ca certificate authority
I0917 16:56:46.619950 8324 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0917 16:56:46.620054 8324 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0917 16:56:46.620136 8324 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0917 16:56:46.620212 8324 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0917 16:56:46.620271 8324 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0917 16:56:46.620331 8324 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0917 16:56:46.620451 8324 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-731605 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0917 16:56:46.620507 8324 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0917 16:56:46.620640 8324 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-731605 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0917 16:56:46.620710 8324 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0917 16:56:46.620778 8324 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0917 16:56:46.620826 8324 kubeadm.go:310] [certs] Generating "sa" key and public key
I0917 16:56:46.620884 8324 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0917 16:56:46.620939 8324 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0917 16:56:46.620998 8324 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0917 16:56:46.621060 8324 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0917 16:56:46.621127 8324 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0917 16:56:46.621185 8324 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0917 16:56:46.621268 8324 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0917 16:56:46.621337 8324 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0917 16:56:46.623364 8324 out.go:235] - Booting up control plane ...
I0917 16:56:46.623521 8324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0917 16:56:46.623613 8324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0917 16:56:46.623754 8324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0917 16:56:46.623893 8324 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0917 16:56:46.624002 8324 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0917 16:56:46.624073 8324 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0917 16:56:46.624245 8324 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0917 16:56:46.624373 8324 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0917 16:56:46.624446 8324 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000730106s
I0917 16:56:46.624538 8324 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0917 16:56:46.624631 8324 kubeadm.go:310] [api-check] The API server is healthy after 6.014822155s
I0917 16:56:46.624786 8324 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0917 16:56:46.624942 8324 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0917 16:56:46.625005 8324 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0917 16:56:46.625194 8324 kubeadm.go:310] [mark-control-plane] Marking the node addons-731605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0917 16:56:46.625253 8324 kubeadm.go:310] [bootstrap-token] Using token: mh7oco.y3y87sddnrom4oau
I0917 16:56:46.627235 8324 out.go:235] - Configuring RBAC rules ...
I0917 16:56:46.627423 8324 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0917 16:56:46.627552 8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0917 16:56:46.627764 8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0917 16:56:46.627953 8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0917 16:56:46.628112 8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0917 16:56:46.628218 8324 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0917 16:56:46.628345 8324 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0917 16:56:46.628396 8324 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0917 16:56:46.628452 8324 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0917 16:56:46.628464 8324 kubeadm.go:310]
I0917 16:56:46.628528 8324 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0917 16:56:46.628539 8324 kubeadm.go:310]
I0917 16:56:46.628621 8324 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0917 16:56:46.628632 8324 kubeadm.go:310]
I0917 16:56:46.628659 8324 kubeadm.go:310] mkdir -p $HOME/.kube
I0917 16:56:46.628729 8324 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0917 16:56:46.628790 8324 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0917 16:56:46.628800 8324 kubeadm.go:310]
I0917 16:56:46.628857 8324 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0917 16:56:46.628867 8324 kubeadm.go:310]
I0917 16:56:46.628918 8324 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0917 16:56:46.628929 8324 kubeadm.go:310]
I0917 16:56:46.628985 8324 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0917 16:56:46.629070 8324 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0917 16:56:46.629145 8324 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0917 16:56:46.629152 8324 kubeadm.go:310]
I0917 16:56:46.629241 8324 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0917 16:56:46.629325 8324 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0917 16:56:46.629333 8324 kubeadm.go:310]
I0917 16:56:46.629422 8324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mh7oco.y3y87sddnrom4oau \
I0917 16:56:46.629534 8324 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:cc495b07a00a58e640dc79cf1f74d56bfb00f4839c2d5eb8e4adc88dc1953060 \
I0917 16:56:46.629559 8324 kubeadm.go:310] --control-plane
I0917 16:56:46.629567 8324 kubeadm.go:310]
I0917 16:56:46.629657 8324 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0917 16:56:46.629669 8324 kubeadm.go:310]
I0917 16:56:46.629756 8324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mh7oco.y3y87sddnrom4oau \
I0917 16:56:46.629880 8324 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:cc495b07a00a58e640dc79cf1f74d56bfb00f4839c2d5eb8e4adc88dc1953060
I0917 16:56:46.629896 8324 cni.go:84] Creating CNI manager for ""
I0917 16:56:46.629953 8324 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0917 16:56:46.631966 8324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0917 16:56:46.634041 8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0917 16:56:46.642808 8324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0917 16:56:46.661548 8324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0917 16:56:46.661638 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:46.661731 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-731605 minikube.k8s.io/updated_at=2024_09_17T16_56_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-731605 minikube.k8s.io/primary=true
I0917 16:56:46.798498 8324 ops.go:34] apiserver oom_adj: -16
I0917 16:56:46.798632 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:47.298736 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:47.799332 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:48.299527 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:48.799176 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:49.299672 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:49.799482 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:50.298728 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:50.799484 8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0917 16:56:50.904836 8324 kubeadm.go:1113] duration metric: took 4.24326739s to wait for elevateKubeSystemPrivileges
I0917 16:56:50.904872 8324 kubeadm.go:394] duration metric: took 20.453884373s to StartCluster
I0917 16:56:50.904889 8324 settings.go:142] acquiring lock: {Name:mkdb6771861a9971ad02b34bc008b515d936ba60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:50.905019 8324 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19662-2253/kubeconfig
I0917 16:56:50.905478 8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/kubeconfig: {Name:mk7c603d8d76f3ca0de80c5b79069197b0c670fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0917 16:56:50.905678 8324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0917 16:56:50.905699 8324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0917 16:56:50.906093 8324 config.go:182] Loaded profile config "addons-731605": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 16:56:50.906149 8324 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0917 16:56:50.906278 8324 addons.go:69] Setting yakd=true in profile "addons-731605"
I0917 16:56:50.906301 8324 addons.go:234] Setting addon yakd=true in "addons-731605"
I0917 16:56:50.906327 8324 addons.go:69] Setting inspektor-gadget=true in profile "addons-731605"
I0917 16:56:50.906354 8324 addons.go:69] Setting metrics-server=true in profile "addons-731605"
I0917 16:56:50.906372 8324 addons.go:234] Setting addon metrics-server=true in "addons-731605"
I0917 16:56:50.906391 8324 addons.go:69] Setting cloud-spanner=true in profile "addons-731605"
I0917 16:56:50.906420 8324 addons.go:234] Setting addon cloud-spanner=true in "addons-731605"
I0917 16:56:50.906447 8324 addons.go:69] Setting storage-provisioner=true in profile "addons-731605"
I0917 16:56:50.906484 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.906508 8324 addons.go:234] Setting addon storage-provisioner=true in "addons-731605"
I0917 16:56:50.906543 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.907048 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.907080 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.906414 8324 addons.go:69] Setting registry=true in profile "addons-731605"
I0917 16:56:50.907535 8324 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-731605"
I0917 16:56:50.907542 8324 addons.go:234] Setting addon registry=true in "addons-731605"
I0917 16:56:50.907552 8324 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-731605"
I0917 16:56:50.907570 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.907641 8324 addons.go:69] Setting volcano=true in profile "addons-731605"
I0917 16:56:50.907651 8324 addons.go:234] Setting addon volcano=true in "addons-731605"
I0917 16:56:50.907715 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.908076 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.908177 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.910941 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.912999 8324 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-731605"
I0917 16:56:50.913110 8324 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-731605"
I0917 16:56:50.913161 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.913769 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.918010 8324 addons.go:69] Setting default-storageclass=true in profile "addons-731605"
I0917 16:56:50.918052 8324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-731605"
I0917 16:56:50.918387 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.921298 8324 addons.go:69] Setting volumesnapshots=true in profile "addons-731605"
I0917 16:56:50.921386 8324 addons.go:234] Setting addon volumesnapshots=true in "addons-731605"
I0917 16:56:50.921462 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.922046 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.945721 8324 addons.go:69] Setting gcp-auth=true in profile "addons-731605"
I0917 16:56:50.945814 8324 mustload.go:65] Loading cluster: addons-731605
I0917 16:56:50.946056 8324 config.go:182] Loaded profile config "addons-731605": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 16:56:50.946401 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.971592 8324 addons.go:69] Setting ingress=true in profile "addons-731605"
I0917 16:56:50.971668 8324 addons.go:234] Setting addon ingress=true in "addons-731605"
I0917 16:56:50.971754 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:50.972286 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.972618 8324 out.go:177] * Verifying Kubernetes components...
I0917 16:56:50.974779 8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0917 16:56:51.003216 8324 addons.go:69] Setting ingress-dns=true in profile "addons-731605"
I0917 16:56:51.003295 8324 addons.go:234] Setting addon ingress-dns=true in "addons-731605"
I0917 16:56:51.003365 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.004003 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:50.906356 8324 addons.go:234] Setting addon inspektor-gadget=true in "addons-731605"
I0917 16:56:51.044289 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.044867 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:51.048026 8324 out.go:177] - Using image docker.io/registry:2.8.3
I0917 16:56:51.053808 8324 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0917 16:56:51.058819 8324 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0917 16:56:51.058885 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0917 16:56:51.058986 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:50.906331 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.061808 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:51.075588 8324 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0917 16:56:50.906407 8324 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-731605"
I0917 16:56:51.085516 8324 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-731605"
I0917 16:56:51.085595 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.086120 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:51.089417 8324 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0917 16:56:51.089518 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0917 16:56:51.089622 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:50.906399 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.103641 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:51.106305 8324 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0917 16:56:51.108417 8324 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0917 16:56:51.108488 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0917 16:56:51.108594 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.138837 8324 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-731605"
I0917 16:56:51.138943 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.139466 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:51.167772 8324 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0917 16:56:51.176333 8324 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0917 16:56:51.180193 8324 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0917 16:56:51.211198 8324 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0917 16:56:51.215933 8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0917 16:56:51.215958 8324 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0917 16:56:51.216029 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.222934 8324 addons.go:234] Setting addon default-storageclass=true in "addons-731605"
I0917 16:56:51.222975 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.223386 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:51.242475 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:51.246714 8324 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0917 16:56:51.250233 8324 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0917 16:56:51.252273 8324 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0917 16:56:51.254974 8324 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0917 16:56:51.254998 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0917 16:56:51.255065 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.258109 8324 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0917 16:56:51.261209 8324 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0917 16:56:51.266033 8324 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0917 16:56:51.268897 8324 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0917 16:56:51.270734 8324 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0917 16:56:51.272973 8324 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0917 16:56:51.273992 8324 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0917 16:56:51.327986 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0917 16:56:51.328006 8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0917 16:56:51.328079 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.328341 8324 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0917 16:56:51.328692 8324 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0917 16:56:51.331680 8324 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0917 16:56:51.331724 8324 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0917 16:56:51.331793 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.348747 8324 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0917 16:56:51.353583 8324 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0917 16:56:51.353612 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0917 16:56:51.353683 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.388185 8324 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0917 16:56:51.388205 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0917 16:56:51.388269 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.392955 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.395528 8324 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0917 16:56:51.395646 8324 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0917 16:56:51.397682 8324 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0917 16:56:51.397705 8324 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0917 16:56:51.397771 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.410024 8324 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0917 16:56:51.410052 8324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0917 16:56:51.410137 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.431031 8324 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0917 16:56:51.431280 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.435062 8324 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0917 16:56:51.435084 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0917 16:56:51.435164 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.447301 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.465076 8324 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0917 16:56:51.467630 8324 out.go:177] - Using image docker.io/busybox:stable
I0917 16:56:51.470966 8324 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0917 16:56:51.470990 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0917 16:56:51.471059 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.477702 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.542726 8324 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0917 16:56:51.542748 8324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0917 16:56:51.542809 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:51.561051 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.605182 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.613199 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.629904 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.634383 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.635066 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.654236 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.660266 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
W0917 16:56:51.670675 8324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0917 16:56:51.670703 8324 retry.go:31] will retry after 267.773389ms: ssh: handshake failed: EOF
I0917 16:56:51.681036 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:51.683405 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:52.079113 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0917 16:56:52.240902 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0917 16:56:52.258782 8324 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0917 16:56:52.258855 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0917 16:56:52.268085 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0917 16:56:52.357781 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0917 16:56:52.384265 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0917 16:56:52.545881 8324 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0917 16:56:52.545952 8324 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0917 16:56:52.617430 8324 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0917 16:56:52.617458 8324 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0917 16:56:52.644641 8324 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0917 16:56:52.644683 8324 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0917 16:56:52.668617 8324 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.693773142s)
I0917 16:56:52.668799 8324 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0917 16:56:52.668714 8324 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.763004076s)
I0917 16:56:52.669055 8324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0917 16:56:52.763046 8324 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0917 16:56:52.763123 8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0917 16:56:52.836567 8324 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0917 16:56:52.836646 8324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0917 16:56:52.840236 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0917 16:56:52.840298 8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0917 16:56:52.916848 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0917 16:56:52.937387 8324 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0917 16:56:52.937463 8324 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0917 16:56:52.982524 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0917 16:56:53.001928 8324 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0917 16:56:53.001990 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0917 16:56:53.006755 8324 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0917 16:56:53.006833 8324 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0917 16:56:53.010099 8324 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0917 16:56:53.010170 8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0917 16:56:53.054800 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0917 16:56:53.090943 8324 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0917 16:56:53.091025 8324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0917 16:56:53.110065 8324 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0917 16:56:53.110141 8324 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0917 16:56:53.112705 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0917 16:56:53.112784 8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0917 16:56:53.209040 8324 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0917 16:56:53.209119 8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0917 16:56:53.256034 8324 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0917 16:56:53.256057 8324 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0917 16:56:53.292404 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0917 16:56:53.307753 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0917 16:56:53.390058 8324 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0917 16:56:53.390085 8324 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0917 16:56:53.405166 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0917 16:56:53.405192 8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0917 16:56:53.413149 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0917 16:56:53.413172 8324 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0917 16:56:53.531658 8324 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0917 16:56:53.531698 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0917 16:56:53.688577 8324 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0917 16:56:53.688604 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0917 16:56:53.766753 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0917 16:56:53.766781 8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0917 16:56:53.784071 8324 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0917 16:56:53.784098 8324 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0917 16:56:53.887115 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0917 16:56:53.950740 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0917 16:56:54.024412 8324 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0917 16:56:54.024441 8324 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0917 16:56:54.107322 8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0917 16:56:54.107349 8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0917 16:56:54.198616 8324 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0917 16:56:54.198644 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0917 16:56:54.613917 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0917 16:56:54.669441 8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0917 16:56:54.669467 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0917 16:56:55.459932 8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0917 16:56:55.459958 8324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0917 16:56:55.699778 8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0917 16:56:55.699804 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0917 16:56:56.105133 8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0917 16:56:56.105159 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0917 16:56:56.615228 8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0917 16:56:56.615259 8324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0917 16:56:56.839051 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0917 16:56:58.255235 8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0917 16:56:58.255328 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:58.297703 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:56:59.314136 8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0917 16:56:59.336468 8324 addons.go:234] Setting addon gcp-auth=true in "addons-731605"
I0917 16:56:59.336519 8324 host.go:66] Checking if "addons-731605" exists ...
I0917 16:56:59.336978 8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
I0917 16:56:59.361637 8324 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0917 16:56:59.361690 8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
I0917 16:56:59.386667 8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
I0917 16:57:01.893348 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.814197517s)
I0917 16:57:01.893387 8324 addons.go:475] Verifying addon ingress=true in "addons-731605"
I0917 16:57:01.893647 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.652644363s)
I0917 16:57:01.893756 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.625601437s)
I0917 16:57:01.895591 8324 out.go:177] * Verifying ingress addon...
I0917 16:57:01.899069 8324 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0917 16:57:01.905859 8324 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0917 16:57:01.905890 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:02.426649 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:02.911118 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:03.462457 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:03.492416 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.134602719s)
I0917 16:57:03.492472 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.1081721s)
I0917 16:57:03.492512 8324 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.823437379s)
I0917 16:57:03.492578 8324 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0917 16:57:03.492598 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.510006485s)
I0917 16:57:03.492819 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.437945116s)
I0917 16:57:03.492867 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.20043852s)
I0917 16:57:03.492879 8324 addons.go:475] Verifying addon registry=true in "addons-731605"
I0917 16:57:03.493420 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.185630011s)
I0917 16:57:03.493443 8324 addons.go:475] Verifying addon metrics-server=true in "addons-731605"
I0917 16:57:03.493491 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.606348935s)
I0917 16:57:03.493814 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.543039719s)
W0917 16:57:03.493854 8324 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0917 16:57:03.493876 8324 retry.go:31] will retry after 363.451603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0917 16:57:03.493992 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.88004417s)
I0917 16:57:03.492524 8324 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.823709777s)
I0917 16:57:03.492559 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.575645446s)
I0917 16:57:03.494961 8324 node_ready.go:35] waiting up to 6m0s for node "addons-731605" to be "Ready" ...
I0917 16:57:03.496341 8324 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-731605 service yakd-dashboard -n yakd-dashboard
I0917 16:57:03.496344 8324 out.go:177] * Verifying registry addon...
I0917 16:57:03.499415 8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0917 16:57:03.527640 8324 node_ready.go:49] node "addons-731605" has status "Ready":"True"
I0917 16:57:03.527665 8324 node_ready.go:38] duration metric: took 32.640905ms for node "addons-731605" to be "Ready" ...
I0917 16:57:03.527678 8324 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
W0917 16:57:03.594562 8324 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0917 16:57:03.610893 8324 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0917 16:57:03.610965 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:03.631757 8324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace to be "Ready" ...
I0917 16:57:03.857537 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0917 16:57:03.991542 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:04.011840 8324 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-731605" context rescaled to 1 replicas
I0917 16:57:04.064205 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:04.406074 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:04.508819 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:04.819573 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.980428015s)
I0917 16:57:04.819608 8324 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-731605"
I0917 16:57:04.819834 8324 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.45817447s)
I0917 16:57:04.823245 8324 out.go:177] * Verifying csi-hostpath-driver addon...
I0917 16:57:04.823378 8324 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0917 16:57:04.826093 8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0917 16:57:04.828510 8324 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0917 16:57:04.830490 8324 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0917 16:57:04.830553 8324 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0917 16:57:04.848154 8324 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0917 16:57:04.848233 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:04.903828 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:04.962663 8324 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0917 16:57:04.962749 8324 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0917 16:57:05.004171 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:05.051053 8324 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0917 16:57:05.051097 8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0917 16:57:05.087604 8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0917 16:57:05.332581 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:05.406836 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:05.504040 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:05.639121 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:05.831625 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:05.904453 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:06.003149 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:06.333043 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:06.403439 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:06.435668 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.578034313s)
I0917 16:57:06.503833 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:06.678189 8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.59054311s)
I0917 16:57:06.681223 8324 addons.go:475] Verifying addon gcp-auth=true in "addons-731605"
I0917 16:57:06.683431 8324 out.go:177] * Verifying gcp-auth addon...
I0917 16:57:06.686643 8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0917 16:57:06.692684 8324 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0917 16:57:06.835655 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:06.903854 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:07.003946 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:07.330833 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:07.403484 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:07.503009 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:07.832267 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:07.932124 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:08.003559 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:08.138766 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:08.331273 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:08.404265 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:08.503642 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:08.831878 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:08.904908 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:09.004550 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:09.331048 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:09.405193 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:09.503349 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:09.831748 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:09.932748 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:10.003925 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:10.139053 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:10.330602 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:10.404645 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:10.503372 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:10.831516 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:10.913229 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:11.005244 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:11.330596 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:11.403833 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:11.503600 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:11.831622 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:11.904223 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:12.003946 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:12.142964 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:12.331061 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:12.404309 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:12.503409 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:12.831172 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:12.905658 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:13.003706 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:13.330499 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:13.404285 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:13.503452 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:13.831879 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:13.904437 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:14.003160 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:14.331158 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:14.404062 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:14.503269 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:14.639587 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:14.831801 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:14.903570 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:15.004035 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:15.331295 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:15.404603 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:15.503408 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:15.831319 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:15.903935 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:16.003138 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:16.331341 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:16.405170 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:16.503924 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:16.830811 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:16.905135 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:17.004130 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:17.139152 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:17.330300 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:17.404453 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:17.504774 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:17.831280 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:17.904267 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:18.003944 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:18.331054 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:18.404269 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:18.504113 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:18.831380 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:18.903741 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:19.004167 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:19.330926 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:19.404566 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:19.503329 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:19.638850 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:19.832548 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:19.904502 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:20.013084 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:20.332621 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:20.405051 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:20.504818 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:20.830676 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:20.904151 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:21.004080 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:21.331457 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:21.403922 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:21.504322 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:21.831346 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:21.904149 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:22.003581 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:22.138687 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:22.332223 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:22.405834 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:22.503795 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:22.832336 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:22.909866 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:23.004676 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:23.330798 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:23.404038 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:23.503486 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:23.830792 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:23.903540 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:24.003480 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:24.331773 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:24.403438 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:24.503555 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:24.639174 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:24.831875 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:24.915377 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:25.004907 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:25.332160 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:25.405080 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:25.503618 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:25.831729 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:25.903996 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:26.005317 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:26.331789 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:26.432757 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:26.504651 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0917 16:57:26.831661 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:26.904677 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:27.003531 8324 kapi.go:107] duration metric: took 23.504113916s to wait for kubernetes.io/minikube-addons=registry ...
I0917 16:57:27.138981 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:27.331617 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:27.432460 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:27.831777 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:27.903614 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:28.332229 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:28.404742 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:28.832787 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:28.906577 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:29.140087 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:29.335332 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:29.406037 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:29.834160 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:29.905371 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:30.332728 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:30.405853 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:30.831585 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:30.904338 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:31.140981 8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:31.332266 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:31.404535 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:31.831278 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:31.903738 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:32.330847 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:32.404238 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:32.831312 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:32.905359 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:33.139101 8324 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:33.139126 8324 pod_ready.go:82] duration metric: took 29.507290643s for pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.139139 8324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.143318 8324 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-tg5hf" not found
I0917 16:57:33.143394 8324 pod_ready.go:82] duration metric: took 4.244543ms for pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace to be "Ready" ...
E0917 16:57:33.143421 8324 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-tg5hf" not found
I0917 16:57:33.143459 8324 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.151817 8324 pod_ready.go:93] pod "etcd-addons-731605" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:33.151849 8324 pod_ready.go:82] duration metric: took 8.363435ms for pod "etcd-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.151866 8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.157475 8324 pod_ready.go:93] pod "kube-apiserver-addons-731605" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:33.157506 8324 pod_ready.go:82] duration metric: took 5.629396ms for pod "kube-apiserver-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.157521 8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.162758 8324 pod_ready.go:93] pod "kube-controller-manager-addons-731605" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:33.162784 8324 pod_ready.go:82] duration metric: took 5.251759ms for pod "kube-controller-manager-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.162800 8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dzqf4" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.331768 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:33.337262 8324 pod_ready.go:93] pod "kube-proxy-dzqf4" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:33.337289 8324 pod_ready.go:82] duration metric: took 174.482289ms for pod "kube-proxy-dzqf4" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.337304 8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.403927 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:33.736912 8324 pod_ready.go:93] pod "kube-scheduler-addons-731605" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:33.736939 8324 pod_ready.go:82] duration metric: took 399.626526ms for pod "kube-scheduler-addons-731605" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.736951 8324 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace to be "Ready" ...
I0917 16:57:33.832926 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:33.904859 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:34.331230 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:34.405234 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:34.834454 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:34.903760 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:35.332426 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:35.404235 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:35.744845 8324 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:35.832461 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:35.904403 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:36.331770 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:36.404612 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:36.832349 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:36.907147 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:37.336236 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:37.405349 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:37.831817 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:37.914078 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:38.245834 8324 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace has status "Ready":"False"
I0917 16:57:38.331611 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:38.420490 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:38.745107 8324 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:38.745132 8324 pod_ready.go:82] duration metric: took 5.008172958s for pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace to be "Ready" ...
I0917 16:57:38.745145 8324 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9bwdv" in "kube-system" namespace to be "Ready" ...
I0917 16:57:38.751921 8324 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9bwdv" in "kube-system" namespace has status "Ready":"True"
I0917 16:57:38.751947 8324 pod_ready.go:82] duration metric: took 6.793092ms for pod "nvidia-device-plugin-daemonset-9bwdv" in "kube-system" namespace to be "Ready" ...
I0917 16:57:38.751969 8324 pod_ready.go:39] duration metric: took 35.224225146s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0917 16:57:38.751987 8324 api_server.go:52] waiting for apiserver process to appear ...
I0917 16:57:38.752053 8324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0917 16:57:38.771916 8324 api_server.go:72] duration metric: took 47.866175002s to wait for apiserver process to appear ...
I0917 16:57:38.771943 8324 api_server.go:88] waiting for apiserver healthz status ...
I0917 16:57:38.771965 8324 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0917 16:57:38.782402 8324 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0917 16:57:38.783801 8324 api_server.go:141] control plane version: v1.31.1
I0917 16:57:38.783827 8324 api_server.go:131] duration metric: took 11.876298ms to wait for apiserver health ...
I0917 16:57:38.783835 8324 system_pods.go:43] waiting for kube-system pods to appear ...
I0917 16:57:38.792686 8324 system_pods.go:59] 17 kube-system pods found
I0917 16:57:38.792724 8324 system_pods.go:61] "coredns-7c65d6cfc9-nfdb2" [4a2ff10d-66fd-4411-aeee-a6fd0f092c93] Running
I0917 16:57:38.792735 8324 system_pods.go:61] "csi-hostpath-attacher-0" [efb49be6-b3cb-46a6-ab37-9da589ebee49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0917 16:57:38.792742 8324 system_pods.go:61] "csi-hostpath-resizer-0" [8e281caa-9272-4066-8edc-1969e947de38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0917 16:57:38.792751 8324 system_pods.go:61] "csi-hostpathplugin-kmvnn" [5856def8-de60-43e9-8c1b-df459e3126c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0917 16:57:38.792756 8324 system_pods.go:61] "etcd-addons-731605" [d6b238f1-7f72-40cf-b74e-ed79a0174ca8] Running
I0917 16:57:38.792761 8324 system_pods.go:61] "kube-apiserver-addons-731605" [d768f6b6-3871-4d26-85f4-54ec30c15e51] Running
I0917 16:57:38.792765 8324 system_pods.go:61] "kube-controller-manager-addons-731605" [9cc91090-ea89-4915-a417-1eb8e0859aba] Running
I0917 16:57:38.792772 8324 system_pods.go:61] "kube-ingress-dns-minikube" [7e5b92cd-d0ba-4c9d-b0b6-9efdbb97c241] Running
I0917 16:57:38.792775 8324 system_pods.go:61] "kube-proxy-dzqf4" [ea6f0a01-0aef-40a4-a999-4c7c9f47d4bb] Running
I0917 16:57:38.792780 8324 system_pods.go:61] "kube-scheduler-addons-731605" [7f6cb6a4-4c17-4876-99bf-cd6c418c3854] Running
I0917 16:57:38.792787 8324 system_pods.go:61] "metrics-server-84c5f94fbc-zjjq7" [5244e1a0-b041-4b8b-9a1a-97aa3d2df4f0] Running
I0917 16:57:38.792793 8324 system_pods.go:61] "nvidia-device-plugin-daemonset-9bwdv" [611e2832-baef-4884-ac81-badda29286e4] Running
I0917 16:57:38.792804 8324 system_pods.go:61] "registry-66c9cd494c-zt9dz" [e7f2fc50-5c03-4aec-9040-85d9963af8e6] Running
I0917 16:57:38.792809 8324 system_pods.go:61] "registry-proxy-r92r6" [5d64f5cf-2b0e-40f7-88ca-5822f9941c5a] Running
I0917 16:57:38.792813 8324 system_pods.go:61] "snapshot-controller-56fcc65765-s6zlr" [36495912-63bd-4bf0-840e-6d78e14c70b9] Running
I0917 16:57:38.792817 8324 system_pods.go:61] "snapshot-controller-56fcc65765-vlpz9" [bdc32038-f926-486f-aae6-ed0f0ae51f25] Running
I0917 16:57:38.792826 8324 system_pods.go:61] "storage-provisioner" [27e44c10-971a-4e5f-96f6-bbba4e427bd0] Running
I0917 16:57:38.792833 8324 system_pods.go:74] duration metric: took 8.990908ms to wait for pod list to return data ...
I0917 16:57:38.792847 8324 default_sa.go:34] waiting for default service account to be created ...
I0917 16:57:38.831343 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:38.903961 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:38.935808 8324 default_sa.go:45] found service account: "default"
I0917 16:57:38.935838 8324 default_sa.go:55] duration metric: took 142.984297ms for default service account to be created ...
I0917 16:57:38.935849 8324 system_pods.go:116] waiting for k8s-apps to be running ...
I0917 16:57:39.144047 8324 system_pods.go:86] 17 kube-system pods found
I0917 16:57:39.144092 8324 system_pods.go:89] "coredns-7c65d6cfc9-nfdb2" [4a2ff10d-66fd-4411-aeee-a6fd0f092c93] Running
I0917 16:57:39.144105 8324 system_pods.go:89] "csi-hostpath-attacher-0" [efb49be6-b3cb-46a6-ab37-9da589ebee49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0917 16:57:39.144112 8324 system_pods.go:89] "csi-hostpath-resizer-0" [8e281caa-9272-4066-8edc-1969e947de38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0917 16:57:39.144122 8324 system_pods.go:89] "csi-hostpathplugin-kmvnn" [5856def8-de60-43e9-8c1b-df459e3126c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0917 16:57:39.144141 8324 system_pods.go:89] "etcd-addons-731605" [d6b238f1-7f72-40cf-b74e-ed79a0174ca8] Running
I0917 16:57:39.144147 8324 system_pods.go:89] "kube-apiserver-addons-731605" [d768f6b6-3871-4d26-85f4-54ec30c15e51] Running
I0917 16:57:39.144152 8324 system_pods.go:89] "kube-controller-manager-addons-731605" [9cc91090-ea89-4915-a417-1eb8e0859aba] Running
I0917 16:57:39.144166 8324 system_pods.go:89] "kube-ingress-dns-minikube" [7e5b92cd-d0ba-4c9d-b0b6-9efdbb97c241] Running
I0917 16:57:39.144171 8324 system_pods.go:89] "kube-proxy-dzqf4" [ea6f0a01-0aef-40a4-a999-4c7c9f47d4bb] Running
I0917 16:57:39.144178 8324 system_pods.go:89] "kube-scheduler-addons-731605" [7f6cb6a4-4c17-4876-99bf-cd6c418c3854] Running
I0917 16:57:39.144183 8324 system_pods.go:89] "metrics-server-84c5f94fbc-zjjq7" [5244e1a0-b041-4b8b-9a1a-97aa3d2df4f0] Running
I0917 16:57:39.144187 8324 system_pods.go:89] "nvidia-device-plugin-daemonset-9bwdv" [611e2832-baef-4884-ac81-badda29286e4] Running
I0917 16:57:39.144201 8324 system_pods.go:89] "registry-66c9cd494c-zt9dz" [e7f2fc50-5c03-4aec-9040-85d9963af8e6] Running
I0917 16:57:39.144205 8324 system_pods.go:89] "registry-proxy-r92r6" [5d64f5cf-2b0e-40f7-88ca-5822f9941c5a] Running
I0917 16:57:39.144212 8324 system_pods.go:89] "snapshot-controller-56fcc65765-s6zlr" [36495912-63bd-4bf0-840e-6d78e14c70b9] Running
I0917 16:57:39.144222 8324 system_pods.go:89] "snapshot-controller-56fcc65765-vlpz9" [bdc32038-f926-486f-aae6-ed0f0ae51f25] Running
I0917 16:57:39.144226 8324 system_pods.go:89] "storage-provisioner" [27e44c10-971a-4e5f-96f6-bbba4e427bd0] Running
I0917 16:57:39.144237 8324 system_pods.go:126] duration metric: took 208.381251ms to wait for k8s-apps to be running ...
I0917 16:57:39.144247 8324 system_svc.go:44] waiting for kubelet service to be running ....
I0917 16:57:39.144305 8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0917 16:57:39.158522 8324 system_svc.go:56] duration metric: took 14.251023ms WaitForService to wait for kubelet
I0917 16:57:39.158557 8324 kubeadm.go:582] duration metric: took 48.252828761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0917 16:57:39.158583 8324 node_conditions.go:102] verifying NodePressure condition ...
I0917 16:57:39.331813 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:39.336531 8324 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0917 16:57:39.336606 8324 node_conditions.go:123] node cpu capacity is 2
I0917 16:57:39.336634 8324 node_conditions.go:105] duration metric: took 178.036695ms to run NodePressure ...
I0917 16:57:39.336661 8324 start.go:241] waiting for startup goroutines ...
I0917 16:57:39.404435 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:39.832184 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:39.904906 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:40.331453 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:40.404239 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:40.832283 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:40.904301 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:41.332809 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:41.403916 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:41.832157 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:41.905081 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:42.331312 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:42.405046 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:42.831741 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:42.905520 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:43.333109 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:43.405342 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:43.831133 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:43.903519 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:44.331175 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:44.404468 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:44.830972 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:44.931737 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:45.333553 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:45.405009 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:45.832344 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:45.905192 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:46.332140 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:46.403946 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:46.831697 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:46.903906 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:47.332107 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:47.404271 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:47.831079 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:47.903448 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:48.330732 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:48.403156 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:48.838937 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:48.938002 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:49.334638 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:49.403443 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:49.833247 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:49.904822 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:50.332835 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:50.404153 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:50.837532 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:50.905372 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:51.340582 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:51.405144 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:51.832217 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:51.932872 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:52.331413 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:52.403799 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:52.831388 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:52.904759 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:53.331115 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:53.403588 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:53.831959 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:53.904206 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:54.332420 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:54.404517 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:54.832217 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:54.904255 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:55.331030 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:55.403233 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:55.830862 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:55.903911 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:56.331733 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:56.405002 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:56.833035 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:56.903472 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:57.331019 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:57.404955 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:57.831364 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:57.903919 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:58.332462 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:58.431670 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:58.830600 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0917 16:57:58.903880 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:59.331245 8324 kapi.go:107] duration metric: took 54.505147526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0917 16:57:59.403456 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:57:59.904380 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:00.412664 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:00.903817 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:01.404315 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:01.904188 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:02.403250 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:02.902983 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:03.404321 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:03.903875 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:04.403630 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:04.904251 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:05.403855 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:05.904021 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:06.404086 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:06.903383 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:07.404517 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:07.905174 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:08.403284 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:08.904031 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:09.403764 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:09.904355 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:10.403462 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:10.904607 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:11.404648 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:11.904493 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:12.404097 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:12.904053 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:13.404535 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:13.903843 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:14.403634 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:14.904135 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:15.403103 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:15.903602 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:16.403291 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:16.904114 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:17.405937 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:17.906884 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:18.404512 8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0917 16:58:18.903545 8324 kapi.go:107] duration metric: took 1m17.004488036s to wait for app.kubernetes.io/name=ingress-nginx ...
I0917 16:58:30.197235 8324 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0917 16:58:30.197274 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:30.690605 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:31.190424 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:31.691421 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:32.190116 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:32.689779 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:33.190755 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:33.690906 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:34.190205 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:34.689784 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:35.191247 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:35.690345 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:36.191269 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:36.691247 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:37.190969 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:37.690864 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:38.190997 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:38.690862 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:39.191309 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:39.690402 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:40.190258 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:40.690113 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:41.190547 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:41.690094 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:42.190294 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:42.690794 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:43.190616 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:43.690491 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:44.196248 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:44.690669 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:45.192453 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:45.691393 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:46.190484 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:46.690058 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:47.191206 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:47.689765 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:48.194615 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:48.690143 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:49.190811 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:49.690611 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:50.190658 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:50.690105 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:51.190403 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:51.689773 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:52.189914 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:52.690304 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:53.191103 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:53.689895 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:54.191102 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:54.691121 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:55.192082 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:55.690389 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:56.190180 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:56.690676 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:57.190947 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:57.690992 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:58.190937 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:58.690534 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:59.191386 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:58:59.691178 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:00.215019 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:00.690090 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:01.191332 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:01.690492 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:02.190380 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:02.690819 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:03.190558 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:03.691507 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:04.190791 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:04.689969 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:05.190971 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:05.690234 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:06.190565 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:06.692131 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:07.190435 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:07.690918 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:08.190327 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:08.690023 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:09.190784 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:09.690547 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:10.190780 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:10.691366 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:11.190952 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:11.690883 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:12.191751 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:12.690045 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:13.191429 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:13.690262 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:14.189758 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:14.689980 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:15.191258 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:15.689798 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:16.189933 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:16.689984 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:17.190804 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:17.690483 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:18.190333 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:18.690662 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:19.190938 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:19.690351 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:20.191004 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:20.689694 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:21.190297 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:21.689832 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:22.190657 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:22.690268 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:23.190673 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:23.690301 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:24.189727 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:24.690272 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:25.190354 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:25.689887 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:26.190038 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:26.689762 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:27.190950 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:27.690088 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:28.191148 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:28.689879 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:29.190913 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:29.690702 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:30.191364 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:30.691010 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:31.191277 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:31.690258 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:32.190946 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:32.690744 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:33.190888 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:33.690302 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:34.189942 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:34.690763 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:35.190318 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:35.689745 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:36.190625 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:36.690961 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:37.191540 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:37.692441 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:38.191477 8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0917 16:59:38.690584 8324 kapi.go:107] duration metric: took 2m32.003940051s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0917 16:59:38.692857 8324 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-731605 cluster.
I0917 16:59:38.695271 8324 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0917 16:59:38.697257 8324 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0917 16:59:38.699177 8324 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, volcano, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0917 16:59:38.700969 8324 addons.go:510] duration metric: took 2m47.794830815s for enable addons: enabled=[storage-provisioner cloud-spanner volcano ingress-dns nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0917 16:59:38.701019 8324 start.go:246] waiting for cluster config update ...
I0917 16:59:38.701044 8324 start.go:255] writing updated cluster config ...
I0917 16:59:38.701337 8324 ssh_runner.go:195] Run: rm -f paused
I0917 16:59:39.046149 8324 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0917 16:59:39.048927 8324 out.go:177] * Done! kubectl is now configured to use "addons-731605" cluster and "default" namespace by default
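The gcp-auth output above mentions a `gcp-auth-skip-secret` label for keeping credentials out of a specific pod. A minimal sketch of a pod manifest using that label (the pod and container names here are hypothetical, for illustration only) might look like:

```yaml
# Hypothetical pod spec: the gcp-auth-skip-secret label (mentioned in the
# minikube output above) tells the gcp-auth webhook not to mount GCP
# credentials into this pod.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds-demo        # example name, not from the log
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app                    # example container
    image: busybox
    command: ["sleep", "30"]
  restartPolicy: Never
```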
==> Docker <==
Sep 17 17:09:21 addons-731605 dockerd[1280]: time="2024-09-17T17:09:21.100651994Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 17 17:09:21 addons-731605 dockerd[1280]: time="2024-09-17T17:09:21.104101291Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 17 17:09:24 addons-731605 dockerd[1280]: time="2024-09-17T17:09:24.897283978Z" level=info msg="ignoring event" container=a4b44604909714a55bcb9cd03abad1ede30788f26874f92fc5dc45569594cd88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:24 addons-731605 dockerd[1280]: time="2024-09-17T17:09:24.900961338Z" level=info msg="ignoring event" container=980659fd5d7b9b3905aafc2c40388060c608878ee6fbe258948d9a5dc774b1ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:25 addons-731605 dockerd[1280]: time="2024-09-17T17:09:25.098157593Z" level=info msg="ignoring event" container=0391ae36109d61c77a7ebd2f1bf62fcd8d259445ca68ab9780f3a6ff63a2f7fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:25 addons-731605 dockerd[1280]: time="2024-09-17T17:09:25.108543047Z" level=info msg="ignoring event" container=961d91f7e48500c751df5596a0500d5d6aeba1f17ae36f45acb66a5fa40a3fcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:30 addons-731605 dockerd[1280]: time="2024-09-17T17:09:30.634067643Z" level=info msg="ignoring event" container=0a2ebfe99cfa4f17b07ae8f64336c067a2fd5737c07a1a03ee143d119e5ed627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:30 addons-731605 dockerd[1280]: time="2024-09-17T17:09:30.800600325Z" level=info msg="ignoring event" container=cb21244199262c49750eceddd86887f3d077d1202e7d03bdde74059829f826ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:31 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bcda20f4f0f2000c7fbf8a8882242b723aff37eb1f35a52f6946607402fdb17e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 17 17:09:31 addons-731605 dockerd[1280]: time="2024-09-17T17:09:31.765551811Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Sep 17 17:09:32 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:32Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Sep 17 17:09:32 addons-731605 dockerd[1280]: time="2024-09-17T17:09:32.455527382Z" level=info msg="ignoring event" container=777341523cd638e49172c9a3c59f5b2d4d5325a258b786907cdd8986b37780ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:34 addons-731605 dockerd[1280]: time="2024-09-17T17:09:34.618644738Z" level=info msg="ignoring event" container=bcda20f4f0f2000c7fbf8a8882242b723aff37eb1f35a52f6946607402fdb17e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:36 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1c365ee7882c79cd74cccbf27de272e59587e4b71851827fd144ff9ad6198aa9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 17 17:09:37 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:37Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
Sep 17 17:09:37 addons-731605 dockerd[1280]: time="2024-09-17T17:09:37.418382020Z" level=info msg="ignoring event" container=b7808becbd4f6887873d5bd0e3a852450142a927ab29ff4ae28d42d6970a2608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:38 addons-731605 dockerd[1280]: time="2024-09-17T17:09:38.694378001Z" level=info msg="ignoring event" container=1c365ee7882c79cd74cccbf27de272e59587e4b71851827fd144ff9ad6198aa9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:39 addons-731605 dockerd[1280]: time="2024-09-17T17:09:39.166133987Z" level=info msg="ignoring event" container=24a4a16392dcd1f868c3f045d3ebe339272f76c8ca00f3bee70adca946449480 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.230160801Z" level=info msg="ignoring event" container=90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.332164413Z" level=info msg="ignoring event" container=82075691c00f7c870a397e86b9da1fbbeb20c95df72a1c3f1efa767c30c353db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.571821340Z" level=info msg="ignoring event" container=60fb33f8c319d205b085e361642e2d8c816b39d85dcd51bff92f71aedd0131c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:40 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:40Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-r92r6_kube-system\": unexpected command output nsenter: cannot open /proc/3658/ns/net: No such file or directory\n with error: exit status 1"
Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.816043905Z" level=info msg="ignoring event" container=eb4d38db100b918816e866d3241ba3bf7ba0ec391ca6b36456cdcc214c125532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 17 17:09:40 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e66d23dac583851385b9e1ff80a425d15cd442ffc4e379ca4b964360014c424d/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 17 17:09:41 addons-731605 dockerd[1280]: time="2024-09-17T17:09:41.219615663Z" level=info msg="ignoring event" container=1e5306b090cde950c9d295d124a5df5a59d9133620fc195e91dfe74ab606ae89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
1e5306b090cde fc9db2894f4e4 Less than a second ago Exited helper-pod 0 e66d23dac5838 helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
b7808becbd4f6 busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140 4 seconds ago Exited busybox 0 1c365ee7882c7 test-local-path
777341523cd63 busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 9 seconds ago Exited helper-pod 0 bcda20f4f0f20 helper-pod-create-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
23b2b53b3329d ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 46 seconds ago Exited gadget 7 c082e529a8874 gadget-rmt5s
8a68cd541b1d2 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 b17af86118a92 gcp-auth-89d5ffd79-qclfh
6d86e01a997e2 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 24b97b66662b4 ingress-nginx-controller-bc57996ff-dlbd6
a5985a5cbe6bc 420193b27261a 12 minutes ago Exited patch 1 76da58b1a25c2 ingress-nginx-admission-patch-wmwnk
2bc46ea09841e registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 12 minutes ago Exited create 0 993a7ea37ed36 ingress-nginx-admission-create-h45mt
fd060edcacb72 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 08bb05d109808 metrics-server-84c5f94fbc-zjjq7
19eea6c0b3203 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 91be9c91543f3 local-path-provisioner-86d989889c-4twxk
82075691c00f7 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 12 minutes ago Exited registry-proxy 0 eb4d38db100b9 registry-proxy-r92r6
1b32503749711 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 b229efc62870e cloud-spanner-emulator-769b77f747-4nkn2
28b85fc14950d gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 8f149f15b6b0f kube-ingress-dns-minikube
b1206bfa1af02 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 7b09c4d7d64d2 storage-provisioner
aff19a29674a8 2f6c962e7b831 12 minutes ago Running coredns 0 f76430450ea4f coredns-7c65d6cfc9-nfdb2
1bfe9709592fb 24a140c548c07 12 minutes ago Running kube-proxy 0 ff139dc173e14 kube-proxy-dzqf4
671d0d8d947c8 d3f53a98c0a9d 13 minutes ago Running kube-apiserver 0 7a7390048d8fc kube-apiserver-addons-731605
7a1aea2005d68 7f8aa378bb47d 13 minutes ago Running kube-scheduler 0 c093fd56197c1 kube-scheduler-addons-731605
cddc5b3da9b13 27e3830e14027 13 minutes ago Running etcd 0 755d15375289f etcd-addons-731605
1e9ef30732ba5 279f381cb3736 13 minutes ago Running kube-controller-manager 0 4241d707d97a3 kube-controller-manager-addons-731605
==> controller_ingress [6d86e01a997e] <==
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
W0917 16:58:17.925838 6 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0917 16:58:17.926076 6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I0917 16:58:17.936059 6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
I0917 16:58:18.538815 6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0917 16:58:18.556320 6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0917 16:58:18.570474 6 nginx.go:271] "Starting NGINX Ingress controller"
I0917 16:58:18.583125 6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b37a8e76-dfd4-4875-a3f6-ac9a8bd2add3", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0917 16:58:18.595013 6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ad826244-5b86-4edd-a84f-81f7dd149a3b", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0917 16:58:18.595271 6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"424af7ac-27da-4d25-8391-2c5841750d15", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0917 16:58:19.772349 6 nginx.go:317] "Starting NGINX process"
I0917 16:58:19.772583 6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0917 16:58:19.772678 6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0917 16:58:19.772966 6 controller.go:193] "Configuration changes detected, backend reload required"
I0917 16:58:19.795984 6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0917 16:58:19.796008 6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-dlbd6"
I0917 16:58:19.813670 6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-dlbd6" node="addons-731605"
I0917 16:58:19.825734 6 controller.go:213] "Backend successfully reloaded"
I0917 16:58:19.825811 6 controller.go:224] "Initial sync, sleeping for 1 second"
I0917 16:58:19.825881 6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-dlbd6", UID:"25179970-b297-4b7e-ad11-505505d0f732", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [aff19a29674a] <==
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1607234292]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 16:56:52.762) (total time: 30000ms):
Trace[1607234292]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:57:22.762)
Trace[1607234292]: [30.000332865s] [30.000332865s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1097650227]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 16:56:52.762) (total time: 30000ms):
Trace[1097650227]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:57:22.762)
Trace[1097650227]: [30.000269351s] [30.000269351s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
[INFO] Reloading complete
[INFO] 127.0.0.1:60555 - 14179 "HINFO IN 6978345999287434417.5096588345528641219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012255206s
[INFO] 10.244.0.25:53039 - 125 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000408366s
[INFO] 10.244.0.25:42689 - 17165 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000750477s
[INFO] 10.244.0.25:51585 - 46447 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000758264s
[INFO] 10.244.0.25:32841 - 51141 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000513498s
[INFO] 10.244.0.25:52300 - 14719 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012045s
[INFO] 10.244.0.25:55336 - 21098 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000371042s
[INFO] 10.244.0.25:38510 - 50227 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002856482s
[INFO] 10.244.0.25:33772 - 27234 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002867509s
[INFO] 10.244.0.25:43715 - 47735 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003736837s
[INFO] 10.244.0.25:36022 - 57122 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004384562s
==> describe nodes <==
Name:               addons-731605
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=addons-731605
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
                    minikube.k8s.io/name=addons-731605
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_09_17T16_56_46_0700
                    minikube.k8s.io/version=v1.34.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
                    topology.hostpath.csi/node=addons-731605
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 17 Sep 2024 16:56:43 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  addons-731605
  AcquireTime:     <unset>
  RenewTime:       Tue, 17 Sep 2024 17:09:41 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:43 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    addons-731605
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022296Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022296Ki
  pods:               110
System Info:
  Machine ID:                 286491942543477086e18cdc1090e9c3
  System UUID:                5a48bcfa-b21a-45c7-a6db-3a28ea6859ee
  Boot ID:                    fd8b8b92-550b-4c1f-b1a9-b9b8a832f9f6
  Kernel Version:             5.15.0-1070-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.2.1
  Kubelet Version:            v1.31.1
  Kube-Proxy Version:         v1.31.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (16 in total)
  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
  default                     busybox                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
  default                     cloud-spanner-emulator-769b77f747-4nkn2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  gadget                      gadget-rmt5s                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  gcp-auth                    gcp-auth-89d5ffd79-qclfh                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  ingress-nginx               ingress-nginx-controller-bc57996ff-dlbd6                    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
  kube-system                 coredns-7c65d6cfc9-nfdb2                                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
  kube-system                 etcd-addons-731605                                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
  kube-system                 kube-apiserver-addons-731605                                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-controller-manager-addons-731605                       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-ingress-dns-minikube                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-proxy-dzqf4                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 kube-scheduler-addons-731605                                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
  kube-system                 metrics-server-84c5f94fbc-zjjq7                             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  local-path-storage          helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4  0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
  local-path-storage          local-path-provisioner-86d989889c-4twxk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   0 (0%)
  memory             460Mi (5%)   170Mi (2%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
  Type     Reason                   Age   From             Message
  ----     ------                   ----  ----             -------
  Normal   Starting                 12m   kube-proxy
  Normal   Starting                 12m   kubelet          Starting kubelet.
  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-731605 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-731605 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-731605 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           12m   node-controller  Node addons-731605 event: Registered Node addons-731605 in Controller
==> dmesg <==
[Sep17 16:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.492852] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.848588] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.621504] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [cddc5b3da9b1] <==
{"level":"info","ts":"2024-09-17T16:56:40.092282Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-17T16:56:40.092293Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-17T16:56:40.471738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-17T16:56:40.471843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-17T16:56:40.471930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-17T16:56:40.471983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-17T16:56:40.472032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-17T16:56:40.472078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-17T16:56:40.472117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-17T16:56:40.479525Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-731605 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-17T16:56:40.479941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-17T16:56:40.480364Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-17T16:56:40.483705Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-17T16:56:40.484563Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-17T16:56:40.487711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-17T16:56:40.487741Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-17T16:56:40.488347Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-17T16:56:40.488906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-17T16:56:40.489187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-17T16:56:40.491961Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-17T16:56:40.492150Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-17T16:56:40.492264Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-17T17:06:41.174241Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
{"level":"info","ts":"2024-09-17T17:06:41.228246Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1852,"took":"53.308182ms","hash":3964704908,"current-db-size-bytes":9043968,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4960256,"current-db-size-in-use":"5.0 MB"}
{"level":"info","ts":"2024-09-17T17:06:41.228298Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3964704908,"revision":1852,"compact-revision":-1}
==> gcp-auth [8a68cd541b1d] <==
2024/09/17 16:59:37 GCP Auth Webhook started!
2024/09/17 16:59:56 Ready to marshal response ...
2024/09/17 16:59:56 Ready to write response ...
2024/09/17 16:59:57 Ready to marshal response ...
2024/09/17 16:59:57 Ready to write response ...
2024/09/17 17:00:23 Ready to marshal response ...
2024/09/17 17:00:23 Ready to write response ...
2024/09/17 17:00:23 Ready to marshal response ...
2024/09/17 17:00:23 Ready to write response ...
2024/09/17 17:00:23 Ready to marshal response ...
2024/09/17 17:00:23 Ready to write response ...
2024/09/17 17:08:38 Ready to marshal response ...
2024/09/17 17:08:38 Ready to write response ...
2024/09/17 17:08:47 Ready to marshal response ...
2024/09/17 17:08:47 Ready to write response ...
2024/09/17 17:09:08 Ready to marshal response ...
2024/09/17 17:09:08 Ready to write response ...
2024/09/17 17:09:31 Ready to marshal response ...
2024/09/17 17:09:31 Ready to write response ...
2024/09/17 17:09:31 Ready to marshal response ...
2024/09/17 17:09:31 Ready to write response ...
2024/09/17 17:09:40 Ready to marshal response ...
2024/09/17 17:09:40 Ready to write response ...
==> kernel <==
17:09:42 up 52 min, 0 users, load average: 2.25, 1.17, 0.81
Linux addons-731605 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [671d0d8d947c] <==
I0917 17:00:14.070064 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0917 17:00:14.423814 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0917 17:00:14.467633 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0917 17:00:14.517804 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0917 17:00:14.805828 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0917 17:00:15.071738 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0917 17:00:15.204099 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0917 17:00:15.223373 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0917 17:00:15.271368 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0917 17:00:15.518355 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0917 17:00:15.890119 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0917 17:08:55.071542 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0917 17:09:24.656377 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0917 17:09:24.656425 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0917 17:09:24.705432 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0917 17:09:24.705676 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0917 17:09:24.713721 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0917 17:09:24.713773 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0917 17:09:24.733279 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0917 17:09:24.733475 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0917 17:09:24.765201 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0917 17:09:24.765370 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0917 17:09:25.715453 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0917 17:09:25.765651 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0917 17:09:25.877959 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [1e9ef30732ba] <==
W0917 17:09:26.725332 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:26.725376 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:27.226141 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:27.226184 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:28.817394 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:28.817440 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:29.302630 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:29.302673 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:29.600455 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:29.600500 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:29.859495 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:29.859538 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:30.013805 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:30.013849 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:33.261053 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:33.261097 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:35.030848 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:35.030917 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:35.065227 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:35.065290 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0917 17:09:35.348738 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:35.348850 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0917 17:09:40.060997 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.168µs"
W0917 17:09:40.483418 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 17:09:40.483499 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [1bfe9709592f] <==
I0917 16:56:52.407348 1 server_linux.go:66] "Using iptables proxy"
I0917 16:56:52.520837 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0917 16:56:52.520910 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0917 16:56:52.563475 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0917 16:56:52.563723 1 server_linux.go:169] "Using iptables Proxier"
I0917 16:56:52.567874 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0917 16:56:52.568819 1 server.go:483] "Version info" version="v1.31.1"
I0917 16:56:52.568845 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0917 16:56:52.571471 1 config.go:199] "Starting service config controller"
I0917 16:56:52.571710 1 shared_informer.go:313] Waiting for caches to sync for service config
I0917 16:56:52.571745 1 config.go:105] "Starting endpoint slice config controller"
I0917 16:56:52.571750 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0917 16:56:52.576115 1 config.go:328] "Starting node config controller"
I0917 16:56:52.576432 1 shared_informer.go:313] Waiting for caches to sync for node config
I0917 16:56:52.672566 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0917 16:56:52.672617 1 shared_informer.go:320] Caches are synced for service config
I0917 16:56:52.676507 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [7a1aea2005d6] <==
W0917 16:56:43.181064 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0917 16:56:43.181099 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0917 16:56:43.181182 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0917 16:56:43.181205 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0917 16:56:43.180858 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0917 16:56:43.181273 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.015591 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0917 16:56:44.015770 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.124198 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0917 16:56:44.124245 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.176638 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0917 16:56:44.176760 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.185995 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0917 16:56:44.186238 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.246437 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0917 16:56:44.246647 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.334306 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0917 16:56:44.334588 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.358786 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0917 16:56:44.359028 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.388122 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0917 16:56:44.388166 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0917 16:56:44.498405 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0917 16:56:44.498447 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0917 16:56:47.169657 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.263367 2329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06134add-c3e9-4ac8-acd2-14b40b0ed5e0-kube-api-access-6dcsc" (OuterVolumeSpecName: "kube-api-access-6dcsc") pod "06134add-c3e9-4ac8-acd2-14b40b0ed5e0" (UID: "06134add-c3e9-4ac8-acd2-14b40b0ed5e0"). InnerVolumeSpecName "kube-api-access-6dcsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.331524 2329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6dcsc\" (UniqueName: \"kubernetes.io/projected/06134add-c3e9-4ac8-acd2-14b40b0ed5e0-kube-api-access-6dcsc\") on node \"addons-731605\" DevicePath \"\""
Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.331575 2329 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/06134add-c3e9-4ac8-acd2-14b40b0ed5e0-gcp-creds\") on node \"addons-731605\" DevicePath \"\""
Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.635659 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c365ee7882c79cd74cccbf27de272e59587e4b71851827fd144ff9ad6198aa9"
Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.956069 2329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06134add-c3e9-4ac8-acd2-14b40b0ed5e0" path="/var/lib/kubelet/pods/06134add-c3e9-4ac8-acd2-14b40b0ed5e0/volumes"
Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.956460 2329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eadbb459-0328-4ead-a0b9-83d8977e81e1" path="/var/lib/kubelet/pods/eadbb459-0328-4ead-a0b9-83d8977e81e1/volumes"
Sep 17 17:09:40 addons-731605 kubelet[2329]: E0917 17:09:40.046550 2329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eadbb459-0328-4ead-a0b9-83d8977e81e1" containerName="busybox"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.046811 2329 memory_manager.go:354] "RemoveStaleState removing state" podUID="eadbb459-0328-4ead-a0b9-83d8977e81e1" containerName="busybox"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.145213 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-script\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.145517 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-data\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.147175 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-gcp-creds\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.147340 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krgw\" (UniqueName: \"kubernetes.io/projected/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-kube-api-access-7krgw\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.653267 2329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjnl\" (UniqueName: \"kubernetes.io/projected/e7f2fc50-5c03-4aec-9040-85d9963af8e6-kube-api-access-ncjnl\") pod \"e7f2fc50-5c03-4aec-9040-85d9963af8e6\" (UID: \"e7f2fc50-5c03-4aec-9040-85d9963af8e6\") "
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.657933 2329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f2fc50-5c03-4aec-9040-85d9963af8e6-kube-api-access-ncjnl" (OuterVolumeSpecName: "kube-api-access-ncjnl") pod "e7f2fc50-5c03-4aec-9040-85d9963af8e6" (UID: "e7f2fc50-5c03-4aec-9040-85d9963af8e6"). InnerVolumeSpecName "kube-api-access-ncjnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.694713 2329 scope.go:117] "RemoveContainer" containerID="90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.768297 2329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ncjnl\" (UniqueName: \"kubernetes.io/projected/e7f2fc50-5c03-4aec-9040-85d9963af8e6-kube-api-access-ncjnl\") on node \"addons-731605\" DevicePath \"\""
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.787346 2329 scope.go:117] "RemoveContainer" containerID="90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
Sep 17 17:09:40 addons-731605 kubelet[2329]: E0917 17:09:40.831642 2329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a" containerID="90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.831822 2329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"} err="failed to get container status \"90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a\": rpc error: code = Unknown desc = Error response from daemon: No such container: 90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.905918 2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66d23dac583851385b9e1ff80a425d15cd442ffc4e379ca4b964360014c424d"
Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.072254 2329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjmj\" (UniqueName: \"kubernetes.io/projected/5d64f5cf-2b0e-40f7-88ca-5822f9941c5a-kube-api-access-kmjmj\") pod \"5d64f5cf-2b0e-40f7-88ca-5822f9941c5a\" (UID: \"5d64f5cf-2b0e-40f7-88ca-5822f9941c5a\") "
Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.076813 2329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d64f5cf-2b0e-40f7-88ca-5822f9941c5a-kube-api-access-kmjmj" (OuterVolumeSpecName: "kube-api-access-kmjmj") pod "5d64f5cf-2b0e-40f7-88ca-5822f9941c5a" (UID: "5d64f5cf-2b0e-40f7-88ca-5822f9941c5a"). InnerVolumeSpecName "kube-api-access-kmjmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.174310 2329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kmjmj\" (UniqueName: \"kubernetes.io/projected/5d64f5cf-2b0e-40f7-88ca-5822f9941c5a-kube-api-access-kmjmj\") on node \"addons-731605\" DevicePath \"\""
Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.936033 2329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f2fc50-5c03-4aec-9040-85d9963af8e6" path="/var/lib/kubelet/pods/e7f2fc50-5c03-4aec-9040-85d9963af8e6/volumes"
Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.970678 2329 scope.go:117] "RemoveContainer" containerID="82075691c00f7c870a397e86b9da1fbbeb20c95df72a1c3f1efa767c30c353db"
==> storage-provisioner [b1206bfa1af0] <==
I0917 16:56:57.816348 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0917 16:56:57.849407 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0917 16:56:57.849451 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0917 16:56:57.878324 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0917 16:56:57.878502 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-731605_ab94a06c-941e-418f-85c3-b8f646185e3f!
I0917 16:56:57.879827 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49612a31-d7be-4ca6-b014-a63c8813aa59", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-731605_ab94a06c-941e-418f-85c3-b8f646185e3f became leader
I0917 16:56:57.979083 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-731605_ab94a06c-941e-418f-85c3-b8f646185e3f!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-731605 -n addons-731605
helpers_test.go:261: (dbg) Run: kubectl --context addons-731605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-731605 describe pod busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-731605 describe pod busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4: exit status 1 (153.602708ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-731605/192.168.49.2
Start Time:       Tue, 17 Sep 2024 17:00:23 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkp4b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-wkp4b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m20s                   default-scheduler  Successfully assigned default/busybox to addons-731605
  Normal   Pulling    7m52s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m51s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m51s (x4 over 9m19s)   kubelet            Error: ErrImagePull
  Warning  Failed     7m26s (x6 over 9m19s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m12s (x20 over 9m19s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-h45mt" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-wmwnk" not found
Error from server (NotFound): pods "helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-731605 describe pod busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.48s)