=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 8.100394ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-cfh4x" [7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006709101s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pn662" [eb773589-5926-4f4f-8548-d2dee389a285] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003723867s
addons_test.go:338: (dbg) Run: kubectl --context addons-835847 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-835847 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-835847 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.115643754s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-835847 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-arm64 -p addons-835847 ip
2024/09/27 00:28:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-arm64 -p addons-835847 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-835847
helpers_test.go:235: (dbg) docker inspect addons-835847:
-- stdout --
[
{
"Id": "046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328",
"Created": "2024-09-27T00:15:19.16308535Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8860,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-27T00:15:19.321446461Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
"ResolvConfPath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/hostname",
"HostsPath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/hosts",
"LogPath": "/var/lib/docker/containers/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328/046d9d4a776ed344761bc7f5e95bd35684e762003d8995dc4e7905da5dc84328-json.log",
"Name": "/addons-835847",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-835847:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-835847",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347-init/diff:/var/lib/docker/overlay2/3144040d268400c51a492b73fb520261a7f283b4a42ff2b53daf66af92d700ae/diff",
"MergedDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347/merged",
"UpperDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347/diff",
"WorkDir": "/var/lib/docker/overlay2/a2add0cc7f3ae8b4b67cacce80025f784453cb160071effbded7f6be46467347/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "addons-835847",
"Source": "/var/lib/docker/volumes/addons-835847/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "addons-835847",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-835847",
"name.minikube.sigs.k8s.io": "addons-835847",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "597cf295b0e02e1436557b2c044f41661d1e1c07f8606aedce7c5595f9c72f37",
"SandboxKey": "/var/run/docker/netns/597cf295b0e0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-835847": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "8e3c5bf1226b19b8684ecbd9040cf26155bb704544af3f37377df089f6297817",
"EndpointID": "c44f81b040b7b4de068538501ffa0877e510e499f521b4143008d069d76987cd",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-835847",
"046d9d4a776e"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-835847 -n addons-835847
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-835847 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-835847 logs -n 25: (1.073686964s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-739605 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | -p download-only-739605 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p download-only-739605 | download-only-739605 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| start | -o=json --download-only | download-only-574047 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | -p download-only-574047 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p download-only-574047 | download-only-574047 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p download-only-739605 | download-only-739605 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p download-only-574047 | download-only-574047 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| start | --download-only -p | download-docker-686350 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | download-docker-686350 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-686350 | download-docker-686350 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| start | --download-only -p | binary-mirror-571152 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | binary-mirror-571152 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:40555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-571152 | binary-mirror-571152 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| addons | disable dashboard -p | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | addons-835847 | | | | | |
| addons | enable dashboard -p | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | addons-835847 | | | | | |
| start | -p addons-835847 --wait=true | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:18 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-835847 addons disable | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:19 UTC | 27 Sep 24 00:19 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | enable headlamp | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
| | -p addons-835847 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-835847 addons disable | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-835847 addons | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-835847 addons | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-835847 ip | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
| addons | addons-835847 addons disable | addons-835847 | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/27 00:14:55
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0927 00:14:55.582964 8355 out.go:345] Setting OutFile to fd 1 ...
I0927 00:14:55.583345 8355 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:14:55.583376 8355 out.go:358] Setting ErrFile to fd 2...
I0927 00:14:55.583396 8355 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:14:55.583677 8355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-2273/.minikube/bin
I0927 00:14:55.584206 8355 out.go:352] Setting JSON to false
I0927 00:14:55.584969 8355 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3447,"bootTime":1727392649,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0927 00:14:55.585057 8355 start.go:139] virtualization:
I0927 00:14:55.588842 8355 out.go:177] * [addons-835847] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0927 00:14:55.590766 8355 out.go:177] - MINIKUBE_LOCATION=19711
I0927 00:14:55.590834 8355 notify.go:220] Checking for updates...
I0927 00:14:55.593083 8355 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0927 00:14:55.594876 8355 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19711-2273/kubeconfig
I0927 00:14:55.596759 8355 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-2273/.minikube
I0927 00:14:55.598540 8355 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0927 00:14:55.600599 8355 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0927 00:14:55.602506 8355 driver.go:394] Setting default libvirt URI to qemu:///system
I0927 00:14:55.622917 8355 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I0927 00:14:55.623038 8355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0927 00:14:55.689159 8355 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:14:55.67916877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0927 00:14:55.689277 8355 docker.go:318] overlay module found
I0927 00:14:55.691510 8355 out.go:177] * Using the docker driver based on user configuration
I0927 00:14:55.693178 8355 start.go:297] selected driver: docker
I0927 00:14:55.693197 8355 start.go:901] validating driver "docker" against <nil>
I0927 00:14:55.693223 8355 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0927 00:14:55.693868 8355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0927 00:14:55.747195 8355 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:14:55.738114718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0927 00:14:55.747395 8355 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0927 00:14:55.747630 8355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0927 00:14:55.749390 8355 out.go:177] * Using Docker driver with root privileges
I0927 00:14:55.750952 8355 cni.go:84] Creating CNI manager for ""
I0927 00:14:55.751027 8355 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0927 00:14:55.751040 8355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0927 00:14:55.751111 8355 start.go:340] cluster config:
{Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0927 00:14:55.753280 8355 out.go:177] * Starting "addons-835847" primary control-plane node in "addons-835847" cluster
I0927 00:14:55.755160 8355 cache.go:121] Beginning downloading kic base image for docker with docker
I0927 00:14:55.757060 8355 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
I0927 00:14:55.759333 8355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:14:55.759382 8355 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0927 00:14:55.759394 8355 cache.go:56] Caching tarball of preloaded images
I0927 00:14:55.759408 8355 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
I0927 00:14:55.759485 8355 preload.go:172] Found /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0927 00:14:55.759496 8355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0927 00:14:55.759848 8355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/config.json ...
I0927 00:14:55.759878 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/config.json: {Name:mk28cc37583ccb48ee2b43c135e040bd4836d4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:14:55.774453 8355 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
I0927 00:14:55.774552 8355 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
I0927 00:14:55.774569 8355 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
I0927 00:14:55.774574 8355 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
I0927 00:14:55.774581 8355 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
I0927 00:14:55.774586 8355 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
I0927 00:15:12.509028 8355 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
I0927 00:15:12.509067 8355 cache.go:194] Successfully downloaded all kic artifacts
I0927 00:15:12.509096 8355 start.go:360] acquireMachinesLock for addons-835847: {Name:mkb615d14eff31a0a732f121850f6b6d555eb931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 00:15:12.509213 8355 start.go:364] duration metric: took 95.334µs to acquireMachinesLock for "addons-835847"
I0927 00:15:12.509244 8355 start.go:93] Provisioning new machine with config: &{Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0927 00:15:12.509322 8355 start.go:125] createHost starting for "" (driver="docker")
I0927 00:15:12.512025 8355 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0927 00:15:12.512275 8355 start.go:159] libmachine.API.Create for "addons-835847" (driver="docker")
I0927 00:15:12.512317 8355 client.go:168] LocalClient.Create starting
I0927 00:15:12.512431 8355 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem
I0927 00:15:12.862261 8355 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem
I0927 00:15:13.113712 8355 cli_runner.go:164] Run: docker network inspect addons-835847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 00:15:13.129492 8355 cli_runner.go:211] docker network inspect addons-835847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 00:15:13.129588 8355 network_create.go:284] running [docker network inspect addons-835847] to gather additional debugging logs...
I0927 00:15:13.129611 8355 cli_runner.go:164] Run: docker network inspect addons-835847
W0927 00:15:13.144407 8355 cli_runner.go:211] docker network inspect addons-835847 returned with exit code 1
I0927 00:15:13.144438 8355 network_create.go:287] error running [docker network inspect addons-835847]: docker network inspect addons-835847: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-835847 not found
I0927 00:15:13.144452 8355 network_create.go:289] output of [docker network inspect addons-835847]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-835847 not found
** /stderr **
I0927 00:15:13.144545 8355 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 00:15:13.159735 8355 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bc5360}
I0927 00:15:13.159779 8355 network_create.go:124] attempt to create docker network addons-835847 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0927 00:15:13.159837 8355 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-835847 addons-835847
I0927 00:15:13.231255 8355 network_create.go:108] docker network addons-835847 192.168.49.0/24 created
I0927 00:15:13.231298 8355 kic.go:121] calculated static IP "192.168.49.2" for the "addons-835847" container
I0927 00:15:13.231369 8355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0927 00:15:13.245619 8355 cli_runner.go:164] Run: docker volume create addons-835847 --label name.minikube.sigs.k8s.io=addons-835847 --label created_by.minikube.sigs.k8s.io=true
I0927 00:15:13.264827 8355 oci.go:103] Successfully created a docker volume addons-835847
I0927 00:15:13.264912 8355 cli_runner.go:164] Run: docker run --rm --name addons-835847-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-835847 --entrypoint /usr/bin/test -v addons-835847:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
I0927 00:15:15.390166 8355 cli_runner.go:217] Completed: docker run --rm --name addons-835847-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-835847 --entrypoint /usr/bin/test -v addons-835847:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.125216019s)
I0927 00:15:15.390192 8355 oci.go:107] Successfully prepared a docker volume addons-835847
I0927 00:15:15.390224 8355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:15:15.390243 8355 kic.go:194] Starting extracting preloaded images to volume ...
I0927 00:15:15.390310 8355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-835847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
I0927 00:15:19.083766 8355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-2273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-835847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.693410155s)
I0927 00:15:19.083797 8355 kic.go:203] duration metric: took 3.693551141s to extract preloaded images to volume ...
W0927 00:15:19.083938 8355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0927 00:15:19.084087 8355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0927 00:15:19.147494 8355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-835847 --name addons-835847 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-835847 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-835847 --network addons-835847 --ip 192.168.49.2 --volume addons-835847:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
I0927 00:15:19.494651 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Running}}
I0927 00:15:19.517803 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:19.541883 8355 cli_runner.go:164] Run: docker exec addons-835847 stat /var/lib/dpkg/alternatives/iptables
I0927 00:15:19.608869 8355 oci.go:144] the created container "addons-835847" has a running status.
I0927 00:15:19.608901 8355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa...
I0927 00:15:20.378900 8355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0927 00:15:20.399032 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:20.416011 8355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0927 00:15:20.416029 8355 kic_runner.go:114] Args: [docker exec --privileged addons-835847 chown docker:docker /home/docker/.ssh/authorized_keys]
I0927 00:15:20.474353 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:20.498624 8355 machine.go:93] provisionDockerMachine start ...
I0927 00:15:20.498709 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:20.520787 8355 main.go:141] libmachine: Using SSH client type: native
I0927 00:15:20.521036 8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0927 00:15:20.521049 8355 main.go:141] libmachine: About to run SSH command:
hostname
I0927 00:15:20.651255 8355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-835847
I0927 00:15:20.651275 8355 ubuntu.go:169] provisioning hostname "addons-835847"
I0927 00:15:20.651335 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:20.671336 8355 main.go:141] libmachine: Using SSH client type: native
I0927 00:15:20.671578 8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0927 00:15:20.671598 8355 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-835847 && echo "addons-835847" | sudo tee /etc/hostname
I0927 00:15:20.815113 8355 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-835847
I0927 00:15:20.815194 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:20.831991 8355 main.go:141] libmachine: Using SSH client type: native
I0927 00:15:20.832270 8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0927 00:15:20.832296 8355 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-835847' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-835847/g' /etc/hosts;
else
echo '127.0.1.1 addons-835847' | sudo tee -a /etc/hosts;
fi
fi
I0927 00:15:20.963827 8355 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0927 00:15:20.963861 8355 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-2273/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-2273/.minikube}
I0927 00:15:20.963882 8355 ubuntu.go:177] setting up certificates
I0927 00:15:20.963891 8355 provision.go:84] configureAuth start
I0927 00:15:20.963950 8355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-835847
I0927 00:15:20.981374 8355 provision.go:143] copyHostCerts
I0927 00:15:20.981455 8355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-2273/.minikube/ca.pem (1078 bytes)
I0927 00:15:20.981577 8355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-2273/.minikube/cert.pem (1123 bytes)
I0927 00:15:20.981642 8355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-2273/.minikube/key.pem (1679 bytes)
I0927 00:15:20.981694 8355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-2273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca-key.pem org=jenkins.addons-835847 san=[127.0.0.1 192.168.49.2 addons-835847 localhost minikube]
I0927 00:15:21.603596 8355 provision.go:177] copyRemoteCerts
I0927 00:15:21.603668 8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0927 00:15:21.603714 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:21.620234 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:21.717264 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0927 00:15:21.742966 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0927 00:15:21.765958 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0927 00:15:21.787860 8355 provision.go:87] duration metric: took 823.956346ms to configureAuth
I0927 00:15:21.787927 8355 ubuntu.go:193] setting minikube options for container-runtime
I0927 00:15:21.788189 8355 config.go:182] Loaded profile config "addons-835847": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:15:21.788272 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:21.804406 8355 main.go:141] libmachine: Using SSH client type: native
I0927 00:15:21.804645 8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0927 00:15:21.804664 8355 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0927 00:15:21.932584 8355 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0927 00:15:21.932603 8355 ubuntu.go:71] root file system type: overlay
I0927 00:15:21.932730 8355 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0927 00:15:21.932819 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:21.951301 8355 main.go:141] libmachine: Using SSH client type: native
I0927 00:15:21.951571 8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0927 00:15:21.951649 8355 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0927 00:15:22.091606 8355 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0927 00:15:22.091728 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:22.108509 8355 main.go:141] libmachine: Using SSH client type: native
I0927 00:15:22.108758 8355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0927 00:15:22.108787 8355 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0927 00:15:22.877780 8355 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-20 11:39:18.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-27 00:15:22.085144394 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0927 00:15:22.877813 8355 machine.go:96] duration metric: took 2.37917045s to provisionDockerMachine
I0927 00:15:22.877825 8355 client.go:171] duration metric: took 10.365498477s to LocalClient.Create
I0927 00:15:22.877841 8355 start.go:167] duration metric: took 10.365568163s to libmachine.API.Create "addons-835847"
I0927 00:15:22.877854 8355 start.go:293] postStartSetup for "addons-835847" (driver="docker")
I0927 00:15:22.877866 8355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0927 00:15:22.877944 8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0927 00:15:22.878028 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:22.894452 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:22.988926 8355 ssh_runner.go:195] Run: cat /etc/os-release
I0927 00:15:22.991902 8355 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0927 00:15:22.991947 8355 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0927 00:15:22.991959 8355 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0927 00:15:22.991974 8355 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0927 00:15:22.991988 8355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-2273/.minikube/addons for local assets ...
I0927 00:15:22.992055 8355 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-2273/.minikube/files for local assets ...
I0927 00:15:22.992105 8355 start.go:296] duration metric: took 114.244293ms for postStartSetup
I0927 00:15:22.992441 8355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-835847
I0927 00:15:23.008730 8355 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/config.json ...
I0927 00:15:23.009018 8355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0927 00:15:23.009079 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:23.027435 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:23.116319 8355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0927 00:15:23.120221 8355 start.go:128] duration metric: took 10.61088474s to createHost
I0927 00:15:23.120242 8355 start.go:83] releasing machines lock for "addons-835847", held for 10.611013935s
I0927 00:15:23.120305 8355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-835847
I0927 00:15:23.135648 8355 ssh_runner.go:195] Run: cat /version.json
I0927 00:15:23.135683 8355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0927 00:15:23.135698 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:23.135749 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:23.156097 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:23.164186 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:23.376818 8355 ssh_runner.go:195] Run: systemctl --version
I0927 00:15:23.380864 8355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0927 00:15:23.384798 8355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0927 00:15:23.409268 8355 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0927 00:15:23.409353 8355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0927 00:15:23.440088 8355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
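The find/sed pipeline above patches any loopback CNI config so it carries a "name" field and a 1.0.0 cniVersion, then sidelines the bridge/podman configs. A rough Go equivalent of just the loopback patch, assuming a single hypothetical file path (the real command globs /etc/cni/net.d/*loopback.conf*):

// Rough equivalent of the loopback patch: make sure the config has a "name"
// and pin cniVersion to 1.0.0. The path below is a hypothetical example.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	path := "/etc/cni/net.d/200-loopback.conf" // hypothetical example path
	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		fmt.Println(err)
		return
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	patched, _ := json.MarshalIndent(conf, "", "  ")
	if err := os.WriteFile(path, patched, 0644); err != nil {
		fmt.Println(err)
	}
}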
I0927 00:15:23.440113 8355 start.go:495] detecting cgroup driver to use...
I0927 00:15:23.440146 8355 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0927 00:15:23.440240 8355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0927 00:15:23.456177 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0927 00:15:23.465748 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0927 00:15:23.474925 8355 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0927 00:15:23.474990 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0927 00:15:23.484597 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0927 00:15:23.494252 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0927 00:15:23.504538 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0927 00:15:23.513912 8355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0927 00:15:23.522440 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0927 00:15:23.532036 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0927 00:15:23.541253 8355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
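The sed edits above rewrite /etc/containerd/config.toml so containerd uses the runc v2 shim and the cgroupfs driver. A small Go sketch of one of those edits (forcing SystemdCgroup = false); it only mirrors the regex substitution, not minikube's actual implementation:

// Sketch of one sed edit above: force SystemdCgroup = false in containerd's
// config so it agrees with the detected "cgroupfs" driver.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Println(err)
	}
}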
I0927 00:15:23.550601 8355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0927 00:15:23.558738 8355 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0927 00:15:23.558829 8355 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0927 00:15:23.572102 8355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0927 00:15:23.580431 8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0927 00:15:23.673704 8355 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0927 00:15:23.769213 8355 start.go:495] detecting cgroup driver to use...
I0927 00:15:23.769312 8355 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0927 00:15:23.769394 8355 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0927 00:15:23.782008 8355 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0927 00:15:23.782122 8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0927 00:15:23.794312 8355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0927 00:15:23.810771 8355 ssh_runner.go:195] Run: which cri-dockerd
I0927 00:15:23.817398 8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0927 00:15:23.828570 8355 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0927 00:15:23.848148 8355 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0927 00:15:23.952377 8355 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0927 00:15:24.058793 8355 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0927 00:15:24.058967 8355 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0927 00:15:24.085851 8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0927 00:15:24.177468 8355 ssh_runner.go:195] Run: sudo systemctl restart docker
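The "configuring docker to use cgroupfs" step writes a small /etc/docker/daemon.json and then restarts the daemon. The exact 130-byte payload is not shown in the log, so the content below is an assumption that only illustrates the cgroup-driver setting:

// Assumed daemon.json content: only the cgroup-driver setting is illustrated;
// the real file written above is 130 bytes and may carry more options.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	conf := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, _ := json.MarshalIndent(conf, "", "  ")
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /etc/docker/daemon.json; reload and restart docker to apply")
}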
I0927 00:15:24.436627 8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0927 00:15:24.448603 8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0927 00:15:24.460983 8355 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0927 00:15:24.552134 8355 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0927 00:15:24.640823 8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0927 00:15:24.734979 8355 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0927 00:15:24.749039 8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0927 00:15:24.759996 8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0927 00:15:24.848289 8355 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0927 00:15:24.928702 8355 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0927 00:15:24.928861 8355 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0927 00:15:24.933380 8355 start.go:563] Will wait 60s for crictl version
I0927 00:15:24.933504 8355 ssh_runner.go:195] Run: which crictl
I0927 00:15:24.940159 8355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0927 00:15:24.975140 8355 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0927 00:15:24.975253 8355 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0927 00:15:24.998680 8355 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0927 00:15:25.023194 8355 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0927 00:15:25.023305 8355 cli_runner.go:164] Run: docker network inspect addons-835847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 00:15:25.039606 8355 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0927 00:15:25.043444 8355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
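The bash one-liner above rewrites /etc/hosts: any stale host.minikube.internal entry is dropped and the gateway mapping is appended. A Go sketch of the same rewrite (run as root); the IP and hostname come from the log, the rest is illustrative:

// Sketch of the /etc/hosts rewrite: drop any line tagged host.minikube.internal,
// then append the gateway mapping from the log. Needs root; illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Println(err)
	}
}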
I0927 00:15:25.054897 8355 kubeadm.go:883] updating cluster {Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0927 00:15:25.055019 8355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:15:25.055081 8355 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0927 00:15:25.073844 8355 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0927 00:15:25.073868 8355 docker.go:615] Images already preloaded, skipping extraction
I0927 00:15:25.073937 8355 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0927 00:15:25.092646 8355 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0927 00:15:25.092675 8355 cache_images.go:84] Images are preloaded, skipping loading
I0927 00:15:25.092685 8355 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0927 00:15:25.092794 8355 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-835847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0927 00:15:25.092863 8355 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0927 00:15:25.135999 8355 cni.go:84] Creating CNI manager for ""
I0927 00:15:25.136029 8355 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0927 00:15:25.136040 8355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0927 00:15:25.136060 8355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-835847 NodeName:addons-835847 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0927 00:15:25.136233 8355 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-835847"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
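The kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks those documents and reports the kubelet's cgroupDriver, assuming the gopkg.in/yaml.v3 module is available and reusing the kubeadm.yaml path from the log:

// Sketch: decode the multi-document kubeadm.yaml and report each kind,
// flagging the kubelet's cgroupDriver. Assumes gopkg.in/yaml.v3 is available.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println(err)
			return
		}
		kind, _ := doc["kind"].(string)
		fmt.Println("document kind:", kind)
		if kind == "KubeletConfiguration" {
			fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
		}
	}
}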
I0927 00:15:25.136302 8355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0927 00:15:25.144782 8355 binaries.go:44] Found k8s binaries, skipping transfer
I0927 00:15:25.144850 8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0927 00:15:25.156871 8355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0927 00:15:25.174562 8355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0927 00:15:25.192036 8355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0927 00:15:25.209382 8355 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0927 00:15:25.212648 8355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0927 00:15:25.223039 8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0927 00:15:25.307839 8355 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0927 00:15:25.321992 8355 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847 for IP: 192.168.49.2
I0927 00:15:25.322017 8355 certs.go:194] generating shared ca certs ...
I0927 00:15:25.322034 8355 certs.go:226] acquiring lock for ca certs: {Name:mk6b469cb21598aa598a7ad76cb0e9fff426f760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:25.322154 8355 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key
I0927 00:15:25.821501 8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt ...
I0927 00:15:25.821536 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt: {Name:mk5a0578057d437dd3ec15b1fc2dc320142c3756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:25.821743 8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key ...
I0927 00:15:25.821759 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key: {Name:mk121f6d140b9ff66f0fb5942b7fb7d03b6270c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:25.821870 8355 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key
I0927 00:15:26.348212 8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.crt ...
I0927 00:15:26.348247 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.crt: {Name:mkcab7a0550e3ae89f0be7bbad3b91f0d1f678eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.348432 8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key ...
I0927 00:15:26.348445 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key: {Name:mk8e63bf4e68b1ab3013424d2ba114c292acb726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.348532 8355 certs.go:256] generating profile certs ...
I0927 00:15:26.348590 8355 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.key
I0927 00:15:26.348609 8355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt with IP's: []
I0927 00:15:26.672714 8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt ...
I0927 00:15:26.672744 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.crt: {Name:mkbd4cd9b6e96659d61742c652c95d80a48d60e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.672921 8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.key ...
I0927 00:15:26.672933 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/client.key: {Name:mk637c405f925463968a47027001c25855825222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.673019 8355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb
I0927 00:15:26.673039 8355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0927 00:15:26.999445 8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb ...
I0927 00:15:26.999476 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb: {Name:mkaf8fa885a27d87fe23843620326105754da1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.999655 8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb ...
I0927 00:15:26.999669 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb: {Name:mk90ed01a36b135715259a523b426ac426ca466d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.999744 8355 certs.go:381] copying /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt.1d836adb -> /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt
I0927 00:15:26.999825 8355 certs.go:385] copying /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key.1d836adb -> /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key
I0927 00:15:26.999880 8355 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key
I0927 00:15:26.999901 8355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt with IP's: []
I0927 00:15:27.841185 8355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt ...
I0927 00:15:27.841218 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt: {Name:mk43215f1566b51f2de5f848457a9b34b2a4d67d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:27.841402 8355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key ...
I0927 00:15:27.841414 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key: {Name:mka499bf621e2c6b6397d3b54999e74c6e838c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:27.841619 8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca-key.pem (1675 bytes)
I0927 00:15:27.841660 8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/ca.pem (1078 bytes)
I0927 00:15:27.841689 8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/cert.pem (1123 bytes)
I0927 00:15:27.841716 8355 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-2273/.minikube/certs/key.pem (1679 bytes)
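The certs.go steps above generate the shared CAs and the profile certificates before copying them into the node. A minimal sketch of creating a self-signed CA with Go's crypto/x509; the subject, key size, validity, and output paths below are illustrative assumptions, not minikube's exact parameters:

// Minimal self-signed CA sketch (crypto/x509). Subject, key size, validity and
// output paths are illustrative assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Println(err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	cert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	priv := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	_ = os.WriteFile("ca.crt", cert, 0644)
	_ = os.WriteFile("ca.key", priv, 0600)
	fmt.Println("wrote ca.crt and ca.key")
}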
I0927 00:15:27.842326 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0927 00:15:27.865324 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0927 00:15:27.891742 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0927 00:15:27.914572 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0927 00:15:27.937337 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0927 00:15:27.959420 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0927 00:15:27.982205 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0927 00:15:28.004890 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/profiles/addons-835847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0927 00:15:28.028728 8355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-2273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0927 00:15:28.052053 8355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0927 00:15:28.069770 8355 ssh_runner.go:195] Run: openssl version
I0927 00:15:28.075085 8355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0927 00:15:28.084122 8355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:28.087230 8355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:15 /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:28.087308 8355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:28.093959 8355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
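The two commands above install minikubeCA into the system trust store: the certificate is linked under /usr/share/ca-certificates and a <subject-hash>.0 symlink is created in /etc/ssl/certs. A Go sketch of that step, assuming openssl is on PATH and root privileges; not minikube's implementation:

// Sketch of the trust-store step: compute the certificate's subject hash with
// openssl and create the /etc/ssl/certs/<hash>.0 symlink (root required).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // behave like `ln -fs`: replace a stale link if present
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		fmt.Println(err)
	}
}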
I0927 00:15:28.102764 8355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0927 00:15:28.105702 8355 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0927 00:15:28.105749 8355 kubeadm.go:392] StartCluster: {Name:addons-835847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-835847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0927 00:15:28.105873 8355 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0927 00:15:28.123626 8355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0927 00:15:28.133421 8355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0927 00:15:28.142011 8355 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0927 00:15:28.142080 8355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0927 00:15:28.150221 8355 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0927 00:15:28.150241 8355 kubeadm.go:157] found existing configuration files:
I0927 00:15:28.150292 8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0927 00:15:28.158417 8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0927 00:15:28.158499 8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0927 00:15:28.166662 8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0927 00:15:28.174851 8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0927 00:15:28.174916 8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0927 00:15:28.183074 8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0927 00:15:28.191328 8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0927 00:15:28.191411 8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0927 00:15:28.199895 8355 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0927 00:15:28.208016 8355 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0927 00:15:28.208091 8355 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0927 00:15:28.215691 8355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0927 00:15:28.258071 8355 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0927 00:15:28.258140 8355 kubeadm.go:310] [preflight] Running pre-flight checks
I0927 00:15:28.281314 8355 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0927 00:15:28.281384 8355 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0927 00:15:28.281425 8355 kubeadm.go:310] OS: Linux
I0927 00:15:28.281475 8355 kubeadm.go:310] CGROUPS_CPU: enabled
I0927 00:15:28.281527 8355 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0927 00:15:28.281577 8355 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0927 00:15:28.281628 8355 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0927 00:15:28.281680 8355 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0927 00:15:28.281738 8355 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0927 00:15:28.281787 8355 kubeadm.go:310] CGROUPS_PIDS: enabled
I0927 00:15:28.281838 8355 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0927 00:15:28.281888 8355 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0927 00:15:28.352824 8355 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0927 00:15:28.352938 8355 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0927 00:15:28.353035 8355 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0927 00:15:28.365127 8355 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0927 00:15:28.369171 8355 out.go:235] - Generating certificates and keys ...
I0927 00:15:28.369274 8355 kubeadm.go:310] [certs] Using existing ca certificate authority
I0927 00:15:28.369342 8355 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0927 00:15:28.791729 8355 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0927 00:15:29.737295 8355 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0927 00:15:30.229025 8355 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0927 00:15:30.500175 8355 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0927 00:15:30.765590 8355 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0927 00:15:30.765913 8355 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-835847 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0927 00:15:31.778248 8355 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0927 00:15:31.778522 8355 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-835847 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0927 00:15:32.205672 8355 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0927 00:15:32.925869 8355 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0927 00:15:33.313122 8355 kubeadm.go:310] [certs] Generating "sa" key and public key
I0927 00:15:33.313417 8355 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0927 00:15:33.477185 8355 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0927 00:15:34.153424 8355 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0927 00:15:34.654785 8355 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0927 00:15:34.844738 8355 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0927 00:15:35.408980 8355 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0927 00:15:35.409698 8355 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0927 00:15:35.413150 8355 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0927 00:15:35.415475 8355 out.go:235] - Booting up control plane ...
I0927 00:15:35.415597 8355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0927 00:15:35.415721 8355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0927 00:15:35.416997 8355 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0927 00:15:35.430520 8355 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0927 00:15:35.436671 8355 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0927 00:15:35.436726 8355 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0927 00:15:35.534311 8355 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0927 00:15:35.534436 8355 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0927 00:15:36.535815 8355 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001605638s
I0927 00:15:36.535906 8355 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0927 00:15:43.036878 8355 kubeadm.go:310] [api-check] The API server is healthy after 6.501197753s
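The api-check phase above polls the API server's health endpoint until it answers. An illustrative Go sketch of such a wait loop; the healthz URL and the skipped certificate verification are assumptions for the sketch, not kubeadm's real health-check client:

// Illustrative wait loop: poll the API server's /healthz until it returns 200
// or the deadline passes. URL and InsecureSkipVerify are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("API server is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for the API server")
}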
I0927 00:15:43.058665 8355 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0927 00:15:43.073642 8355 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0927 00:15:43.099957 8355 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0927 00:15:43.100199 8355 kubeadm.go:310] [mark-control-plane] Marking the node addons-835847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0927 00:15:43.109954 8355 kubeadm.go:310] [bootstrap-token] Using token: s902bs.tf3jjmvfz7uwqdvh
I0927 00:15:43.111854 8355 out.go:235] - Configuring RBAC rules ...
I0927 00:15:43.112000 8355 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0927 00:15:43.116635 8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0927 00:15:43.125976 8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0927 00:15:43.129288 8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0927 00:15:43.132730 8355 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0927 00:15:43.135965 8355 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0927 00:15:43.444854 8355 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0927 00:15:43.872202 8355 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0927 00:15:44.444681 8355 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0927 00:15:44.445746 8355 kubeadm.go:310]
I0927 00:15:44.445815 8355 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0927 00:15:44.445822 8355 kubeadm.go:310]
I0927 00:15:44.445897 8355 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0927 00:15:44.445902 8355 kubeadm.go:310]
I0927 00:15:44.445926 8355 kubeadm.go:310] mkdir -p $HOME/.kube
I0927 00:15:44.445984 8355 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0927 00:15:44.446033 8355 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0927 00:15:44.446038 8355 kubeadm.go:310]
I0927 00:15:44.446091 8355 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0927 00:15:44.446095 8355 kubeadm.go:310]
I0927 00:15:44.446141 8355 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0927 00:15:44.446146 8355 kubeadm.go:310]
I0927 00:15:44.446197 8355 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0927 00:15:44.446271 8355 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0927 00:15:44.446338 8355 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0927 00:15:44.446342 8355 kubeadm.go:310]
I0927 00:15:44.446424 8355 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0927 00:15:44.446500 8355 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0927 00:15:44.446504 8355 kubeadm.go:310]
I0927 00:15:44.446587 8355 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s902bs.tf3jjmvfz7uwqdvh \
I0927 00:15:44.446691 8355 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:cd6a97da01c3c67156170d76cada7ca61301e3d64f415f6dbfb2beeb22c641c2 \
I0927 00:15:44.446711 8355 kubeadm.go:310] --control-plane
I0927 00:15:44.446715 8355 kubeadm.go:310]
I0927 00:15:44.446799 8355 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0927 00:15:44.446805 8355 kubeadm.go:310]
I0927 00:15:44.447099 8355 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s902bs.tf3jjmvfz7uwqdvh \
I0927 00:15:44.447219 8355 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:cd6a97da01c3c67156170d76cada7ca61301e3d64f415f6dbfb2beeb22c641c2
I0927 00:15:44.450712 8355 kubeadm.go:310] W0927 00:15:28.253273 1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0927 00:15:44.451015 8355 kubeadm.go:310] W0927 00:15:28.254096 1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0927 00:15:44.451231 8355 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0927 00:15:44.451339 8355 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0927 00:15:44.451360 8355 cni.go:84] Creating CNI manager for ""
I0927 00:15:44.451379 8355 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0927 00:15:44.455522 8355 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0927 00:15:44.457526 8355 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0927 00:15:44.465851 8355 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0927 00:15:44.484459 8355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0927 00:15:44.484584 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:44.484658 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-835847 minikube.k8s.io/updated_at=2024_09_27T00_15_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-835847 minikube.k8s.io/primary=true
I0927 00:15:44.620967 8355 ops.go:34] apiserver oom_adj: -16
I0927 00:15:44.623033 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:45.123120 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:45.623700 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:46.123549 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:46.623991 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:47.123671 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:47.623809 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:48.124006 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:48.623911 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:49.123090 8355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:49.217489 8355 kubeadm.go:1113] duration metric: took 4.732949994s to wait for elevateKubeSystemPrivileges
I0927 00:15:49.217519 8355 kubeadm.go:394] duration metric: took 21.111773778s to StartCluster
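The repeated `kubectl get sa default` runs above are a retry loop: cluster start is not treated as done until the default service account has been created. A Go sketch of that loop, reusing the binary path and kubeconfig shown in the log; the timeout is an illustrative assumption:

// Sketch of the retry loop: poll for the default service account with kubectl
// until it exists. Binary path and kubeconfig come from the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}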
I0927 00:15:49.217536 8355 settings.go:142] acquiring lock: {Name:mk9e86eff3579e8eaf68f36246430af37e38da50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:49.217645 8355 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19711-2273/kubeconfig
I0927 00:15:49.218091 8355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-2273/kubeconfig: {Name:mk73f0586b74afb137afdc7b8bae894b77929339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:49.218303 8355 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0927 00:15:49.218432 8355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0927 00:15:49.218671 8355 config.go:182] Loaded profile config "addons-835847": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:15:49.218707 8355 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0927 00:15:49.218782 8355 addons.go:69] Setting yakd=true in profile "addons-835847"
I0927 00:15:49.218802 8355 addons.go:234] Setting addon yakd=true in "addons-835847"
I0927 00:15:49.218824 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.219313 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.219655 8355 addons.go:69] Setting metrics-server=true in profile "addons-835847"
I0927 00:15:49.219679 8355 addons.go:234] Setting addon metrics-server=true in "addons-835847"
I0927 00:15:49.219706 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.220214 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222270 8355 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-835847"
I0927 00:15:49.222304 8355 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-835847"
I0927 00:15:49.222353 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.222359 8355 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-835847"
I0927 00:15:49.222426 8355 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-835847"
I0927 00:15:49.222472 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.222351 8355 addons.go:69] Setting cloud-spanner=true in profile "addons-835847"
I0927 00:15:49.224263 8355 addons.go:234] Setting addon cloud-spanner=true in "addons-835847"
I0927 00:15:49.224293 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.224716 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222521 8355 addons.go:69] Setting registry=true in profile "addons-835847"
I0927 00:15:49.225245 8355 addons.go:234] Setting addon registry=true in "addons-835847"
I0927 00:15:49.225271 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.225681 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.229988 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222526 8355 addons.go:69] Setting storage-provisioner=true in profile "addons-835847"
I0927 00:15:49.231247 8355 addons.go:234] Setting addon storage-provisioner=true in "addons-835847"
I0927 00:15:49.231321 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.231807 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222531 8355 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-835847"
I0927 00:15:49.243437 8355 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-835847"
I0927 00:15:49.243825 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222535 8355 addons.go:69] Setting volcano=true in profile "addons-835847"
I0927 00:15:49.247991 8355 addons.go:234] Setting addon volcano=true in "addons-835847"
I0927 00:15:49.248104 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.248635 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222538 8355 addons.go:69] Setting volumesnapshots=true in profile "addons-835847"
I0927 00:15:49.256202 8355 addons.go:234] Setting addon volumesnapshots=true in "addons-835847"
I0927 00:15:49.256274 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.256785 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.222585 8355 out.go:177] * Verifying Kubernetes components...
I0927 00:15:49.222599 8355 addons.go:69] Setting default-storageclass=true in profile "addons-835847"
I0927 00:15:49.222608 8355 addons.go:69] Setting gcp-auth=true in profile "addons-835847"
I0927 00:15:49.222612 8355 addons.go:69] Setting ingress=true in profile "addons-835847"
I0927 00:15:49.222616 8355 addons.go:69] Setting ingress-dns=true in profile "addons-835847"
I0927 00:15:49.222619 8355 addons.go:69] Setting inspektor-gadget=true in profile "addons-835847"
I0927 00:15:49.223847 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.329948 8355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0927 00:15:49.330072 8355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-835847"
I0927 00:15:49.330408 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.350114 8355 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0927 00:15:49.351911 8355 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0927 00:15:49.351939 8355 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0927 00:15:49.352019 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.354730 8355 mustload.go:65] Loading cluster: addons-835847
I0927 00:15:49.354923 8355 addons.go:234] Setting addon ingress=true in "addons-835847"
I0927 00:15:49.355212 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.354937 8355 addons.go:234] Setting addon ingress-dns=true in "addons-835847"
I0927 00:15:49.360553 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.360992 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.354944 8355 addons.go:234] Setting addon inspektor-gadget=true in "addons-835847"
I0927 00:15:49.373717 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.374196 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.380801 8355 config.go:182] Loaded profile config "addons-835847": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:15:49.381075 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.394128 8355 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0927 00:15:49.398993 8355 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0927 00:15:49.399016 8355 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0927 00:15:49.399082 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.437857 8355 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
I0927 00:15:49.438118 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.438451 8355 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0927 00:15:49.438621 8355 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0927 00:15:49.442509 8355 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I0927 00:15:49.446301 8355 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0927 00:15:49.446372 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0927 00:15:49.446470 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.449372 8355 out.go:177] - Using image docker.io/registry:2.8.3
I0927 00:15:49.464901 8355 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I0927 00:15:49.472290 8355 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I0927 00:15:49.483657 8355 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0927 00:15:49.500182 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I0927 00:15:49.473059 8355 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0927 00:15:49.488427 8355 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0927 00:15:49.500343 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0927 00:15:49.489831 8355 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-835847"
I0927 00:15:49.500377 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.500436 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.500987 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.517789 8355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:49.517813 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0927 00:15:49.517876 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.500257 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.535902 8355 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0927 00:15:49.536534 8355 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0927 00:15:49.538001 8355 addons.go:234] Setting addon default-storageclass=true in "addons-835847"
I0927 00:15:49.538035 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.538439 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:49.553931 8355 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0927 00:15:49.554282 8355 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0927 00:15:49.554300 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0927 00:15:49.554361 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.572584 8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0927 00:15:49.572607 8355 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0927 00:15:49.572706 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.578453 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
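sshutil connects with the key path, port, and user recorded in the line above; the same session can be opened by hand, assuming those values are still current (a sketch, not part of the test run):
  # Manual SSH into the minikube node using the key and forwarded port from the log line above
  ssh -i /home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa \
      -p 32768 docker@127.0.0.1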
I0927 00:15:49.583290 8355 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0927 00:15:49.598353 8355 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0927 00:15:49.600607 8355 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0927 00:15:49.602630 8355 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0927 00:15:49.605573 8355 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0927 00:15:49.608775 8355 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0927 00:15:49.611417 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0927 00:15:49.611441 8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0927 00:15:49.611517 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.620367 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.621485 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.622938 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:49.650248 8355 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0927 00:15:49.652375 8355 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0927 00:15:49.656232 8355 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0927 00:15:49.656261 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0927 00:15:49.656329 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.656476 8355 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0927 00:15:49.656550 8355 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0927 00:15:49.656586 8355 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0927 00:15:49.656651 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.666487 8355 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0927 00:15:49.688256 8355 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0927 00:15:49.690311 8355 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0927 00:15:49.690334 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0927 00:15:49.690395 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.696876 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.710508 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.725746 8355 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0927 00:15:49.728263 8355 out.go:177] - Using image docker.io/busybox:stable
I0927 00:15:49.733826 8355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0927 00:15:49.733850 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0927 00:15:49.733916 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.748567 8355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0927 00:15:49.785551 8355 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0927 00:15:49.797247 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.805058 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.823714 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.825436 8355 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:49.825456 8355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0927 00:15:49.825516 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:49.838148 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.841438 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.842171 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.858504 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.873954 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:49.890019 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:50.505701 8355 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0927 00:15:50.505765 8355 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0927 00:15:50.732812 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0927 00:15:50.733466 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0927 00:15:50.742111 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0927 00:15:50.781207 8355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0927 00:15:50.781280 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0927 00:15:50.847297 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0927 00:15:50.892400 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:50.953993 8355 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0927 00:15:50.954067 8355 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0927 00:15:50.963792 8355 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0927 00:15:50.963857 8355 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0927 00:15:50.995621 8355 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0927 00:15:50.995684 8355 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0927 00:15:51.004924 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0927 00:15:51.008848 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0927 00:15:51.039095 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0927 00:15:51.039168 8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0927 00:15:51.148311 8355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0927 00:15:51.148376 8355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0927 00:15:51.213162 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:51.216366 8355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0927 00:15:51.216422 8355 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0927 00:15:51.249541 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0927 00:15:51.249616 8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0927 00:15:51.273897 8355 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0927 00:15:51.273964 8355 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0927 00:15:51.323569 8355 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0927 00:15:51.323642 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0927 00:15:51.335786 8355 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0927 00:15:51.335859 8355 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0927 00:15:51.420665 8355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0927 00:15:51.420738 8355 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0927 00:15:51.500836 8355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0927 00:15:51.500907 8355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0927 00:15:51.543095 8355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0927 00:15:51.543160 8355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0927 00:15:51.588879 8355 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0927 00:15:51.588949 8355 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0927 00:15:51.592689 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0927 00:15:51.592750 8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0927 00:15:51.615380 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0927 00:15:51.701327 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0927 00:15:51.710692 8355 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0927 00:15:51.710756 8355 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0927 00:15:51.801629 8355 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0927 00:15:51.801698 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0927 00:15:51.827115 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0927 00:15:51.827145 8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0927 00:15:51.872650 8355 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0927 00:15:51.872676 8355 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0927 00:15:51.969270 8355 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.22066858s)
I0927 00:15:51.969302 8355 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
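The sed pipeline completed above edits the coredns ConfigMap in place; a sketch of how to inspect the result, with an approximation of what the sed expressions insert into the Corefile:
  kubectl --context addons-835847 -n kube-system get configmap coredns -o yaml
  # The Corefile gains a hosts block just before the forward directive, roughly:
  #     hosts {
  #        192.168.49.1 host.minikube.internal
  #        fallthrough
  #     }
  # and a "log" directive is inserted ahead of "errors".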
I0927 00:15:51.970327 8355 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.184754644s)
I0927 00:15:51.971030 8355 node_ready.go:35] waiting up to 6m0s for node "addons-835847" to be "Ready" ...
I0927 00:15:51.977817 8355 node_ready.go:49] node "addons-835847" has status "Ready":"True"
I0927 00:15:51.977844 8355 node_ready.go:38] duration metric: took 6.789592ms for node "addons-835847" to be "Ready" ...
I0927 00:15:51.977855 8355 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0927 00:15:51.995041 8355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace to be "Ready" ...
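pod_ready polls the pod's Ready condition through the API; a hypothetical one-off check against the same pod would look roughly like this:
  # Print the Ready condition status ("True"/"False") for the coredns pod being waited on
  kubectl --context addons-835847 -n kube-system get pod coredns-7c65d6cfc9-p4pzt \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'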
I0927 00:15:52.046176 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0927 00:15:52.152336 8355 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0927 00:15:52.152370 8355 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0927 00:15:52.270946 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0927 00:15:52.270978 8355 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0927 00:15:52.294167 8355 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0927 00:15:52.294206 8355 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0927 00:15:52.472947 8355 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-835847" context rescaled to 1 replicas
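kapi.go performs the rescale through the Kubernetes API; the kubectl equivalent of what the line above reports would be roughly:
  # Scale the kube-system coredns deployment down to a single replica
  kubectl --context addons-835847 -n kube-system scale deployment coredns --replicas=1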
I0927 00:15:52.493497 8355 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0927 00:15:52.493518 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0927 00:15:52.511718 8355 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:15:52.511793 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0927 00:15:52.534147 8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0927 00:15:52.534178 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0927 00:15:52.664040 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:15:52.761843 8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0927 00:15:52.761875 8355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0927 00:15:52.881054 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0927 00:15:53.046930 8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0927 00:15:53.046956 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0927 00:15:53.286969 8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0927 00:15:53.287001 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0927 00:15:53.603223 8355 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0927 00:15:53.603289 8355 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0927 00:15:53.855333 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.12243669s)
I0927 00:15:53.855408 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.121895145s)
I0927 00:15:54.002494 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
I0927 00:15:54.712529 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0927 00:15:56.004941 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
I0927 00:15:56.634923 8355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0927 00:15:56.635001 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:56.662590 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:57.597681 8355 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0927 00:15:57.745871 8355 addons.go:234] Setting addon gcp-auth=true in "addons-835847"
I0927 00:15:57.745924 8355 host.go:66] Checking if "addons-835847" exists ...
I0927 00:15:57.746398 8355 cli_runner.go:164] Run: docker container inspect addons-835847 --format={{.State.Status}}
I0927 00:15:57.768690 8355 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0927 00:15:57.768746 8355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-835847
I0927 00:15:57.831910 8355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19711-2273/.minikube/machines/addons-835847/id_rsa Username:docker}
I0927 00:15:58.014887 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
I0927 00:15:59.375658 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.528276525s)
I0927 00:15:59.375761 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.483294097s)
I0927 00:15:59.376025 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.37103202s)
I0927 00:15:59.376143 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.633940999s)
I0927 00:15:59.376177 8355 addons.go:475] Verifying addon ingress=true in "addons-835847"
I0927 00:15:59.380131 8355 out.go:177] * Verifying ingress addon...
I0927 00:15:59.383229 8355 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0927 00:15:59.391660 8355 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0927 00:15:59.391721 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:15:59.890827 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:00.387921 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:00.564946 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:00.927767 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:01.403359 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:01.873952 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.660709927s)
I0927 00:16:01.874020 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.258561087s)
I0927 00:16:01.874036 8355 addons.go:475] Verifying addon registry=true in "addons-835847"
I0927 00:16:01.874165 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.865250383s)
I0927 00:16:01.874708 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.173293146s)
I0927 00:16:01.874737 8355 addons.go:475] Verifying addon metrics-server=true in "addons-835847"
I0927 00:16:01.874821 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.828591123s)
I0927 00:16:01.875244 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.211162886s)
W0927 00:16:01.875285 8355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0927 00:16:01.875304 8355 retry.go:31] will retry after 264.137158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
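The failure above is the usual CRD-ordering race: the VolumeSnapshotClass object is applied in the same batch that creates its CRD, before the API server has established the new type, so minikube retries the batch. A manual workaround sketch (assuming the CRD name shown in the stdout above) is to wait for establishment before re-applying the class:
  # Block until the VolumeSnapshotClass CRD is established, then the batch can be re-applied
  kubectl --context addons-835847 wait --for=condition=Established --timeout=60s \
    crd/volumesnapshotclasses.snapshot.storage.k8s.io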
I0927 00:16:01.875497 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.994399315s)
I0927 00:16:01.880045 8355 out.go:177] * Verifying registry addon...
I0927 00:16:01.880175 8355 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-835847 service yakd-dashboard -n yakd-dashboard
I0927 00:16:01.883090 8355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0927 00:16:01.923139 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:01.923741 8355 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0927 00:16:01.923802 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
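The registry verifier keys off the kubernetes.io/minikube-addons=registry label; the same pods can be listed directly (a sketch):
  # List the registry and registry-proxy pods the verifier is polling
  kubectl --context addons-835847 -n kube-system get pods -l kubernetes.io/minikube-addons=registry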
I0927 00:16:02.023560 8355 pod_ready.go:98] pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:15:49 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:15:50 +0000 UTC,FinishedAt:2024-09-27 00:16:00 +0000 UTC,ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc Started:0x4001d94e80 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x400211dd00} {Name:kube-api-access-skppz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x400211dd10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0927 00:16:02.023639 8355 pod_ready.go:82] duration metric: took 10.028563935s for pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace to be "Ready" ...
E0927 00:16:02.023664 8355 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-p4pzt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:16:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:15:49 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-27 00:15:49 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 00:15:50 +0000 UTC,FinishedAt:2024-09-27 00:16:00 +0000 UTC,ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bb32fcfff2e2595d2d264bc6e83297dad358150ba702da16af08fbf72345befc Started:0x4001d94e80 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x400211dd00} {Name:kube-api-access-skppz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x400211dd10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0927 00:16:02.023688 8355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace to be "Ready" ...
I0927 00:16:02.140344 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:16:02.425985 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:02.426960 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:02.913892 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:02.915066 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:03.004802 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.292222471s)
I0927 00:16:03.004836 8355 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-835847"
I0927 00:16:03.005036 8355 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.236324849s)
I0927 00:16:03.008588 8355 out.go:177] * Verifying csi-hostpath-driver addon...
I0927 00:16:03.008676 8355 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0927 00:16:03.011501 8355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:16:03.013921 8355 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0927 00:16:03.015859 8355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0927 00:16:03.015887 8355 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0927 00:16:03.069592 8355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0927 00:16:03.069663 8355 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0927 00:16:03.092488 8355 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:16:03.092576 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
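Instead of polling, the same readiness gate can be expressed as a blocking kubectl wait; a hypothetical equivalent of the loop that follows, with an illustrative timeout:
  # Block until all csi-hostpath-driver pods report Ready
  kubectl --context addons-835847 -n kube-system wait --for=condition=Ready pod \
    -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m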
I0927 00:16:03.204946 8355 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0927 00:16:03.205015 8355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0927 00:16:03.247751 8355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0927 00:16:03.394435 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:03.394999 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:03.516052 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:03.888386 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:03.889463 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:04.016793 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:04.029898 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:04.390853 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:04.392044 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:04.454547 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.314108564s)
I0927 00:16:04.516881 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:04.731910 8355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.484125546s)
I0927 00:16:04.735070 8355 addons.go:475] Verifying addon gcp-auth=true in "addons-835847"
I0927 00:16:04.738677 8355 out.go:177] * Verifying gcp-auth addon...
I0927 00:16:04.741744 8355 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0927 00:16:04.745200 8355 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
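gcp-auth verification uses the kubernetes.io/minikube-addons=gcp-auth label in the gcp-auth namespace; a direct check would be roughly:
  # Show whether the gcp-auth webhook pod has been created yet
  kubectl --context addons-835847 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth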
I0927 00:16:04.888474 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:04.889138 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:05.017970 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:05.387705 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:05.388347 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:05.517132 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:05.888919 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:05.889436 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:06.016224 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:06.031876 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:06.387664 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:06.388649 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:06.515824 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:06.887444 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:06.888274 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:07.015920 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:07.388808 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:07.389405 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:07.516535 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:07.888218 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:07.888737 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:08.017091 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:08.387960 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:08.388881 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:08.515958 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:08.530648 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:08.887089 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:08.888504 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:09.016049 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:09.386673 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:09.387587 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:09.515825 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:09.887951 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:09.888308 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:10.016648 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:10.388793 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:10.390453 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:10.516484 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:10.887655 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:10.888325 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:11.016131 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:11.029965 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:11.388112 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:11.388615 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:11.516681 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:11.887728 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:11.888826 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:12.016301 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:12.386725 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:12.388630 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:12.516208 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:12.890979 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:12.892213 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:13.016588 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:13.032922 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:13.387947 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:13.388584 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:13.516541 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:13.889252 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:13.891793 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:14.016990 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:14.389533 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:14.390831 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:14.516718 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:14.894676 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:14.895514 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:15.017302 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:15.387859 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:15.389071 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:15.516508 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:15.529584 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:15.887101 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:15.888646 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:16.018777 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:16.389290 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:16.390716 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:16.516332 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:16.887264 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:16.889460 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:17.015774 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:17.387963 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:17.388683 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:17.517094 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:17.531195 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:17.888544 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:17.889428 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:18.016613 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:18.388543 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:18.389154 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:18.516521 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:18.902410 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:18.903841 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:19.018651 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:19.399223 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:19.400308 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:19.516727 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:19.887449 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:19.889276 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:20.017480 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:20.031417 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:20.388517 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:20.388958 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:20.516495 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:20.889394 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:20.890731 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:21.016460 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:21.387651 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:21.388807 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:21.517513 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:21.887693 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:21.889865 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:22.016402 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:22.388667 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:22.389687 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:22.517025 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:22.530760 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:22.889320 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:22.889887 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:23.017907 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:23.388501 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:23.389463 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:23.515642 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:23.887643 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:23.889660 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:24.021051 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:24.389470 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:24.390423 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:24.516192 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:24.887227 8355 kapi.go:107] duration metric: took 23.004133499s to wait for kubernetes.io/minikube-addons=registry ...
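The kapi.go:96/107 pairs above are minikube's addon wait loop: it repeatedly lists kube-system pods carrying the addon's label and logs "Pending" until every match reports Ready (the same loop later covers ingress-nginx, csi-hostpath-driver, and gcp-auth). A minimal client-go sketch of that pattern follows; it is not minikube's actual code, and the kubeconfig path, poll interval, and 6-minute timeout are illustrative assumptions.

// Hedged sketch: poll kube-system for pods matching an addon label until all are Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=registry" // label from the log above
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing yet; keep polling, mirroring the "Pending" lines above
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all pods matching", selector, "are Ready")
}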
I0927 00:16:24.888650 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:25.016320 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:25.029836 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:25.387938 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:25.517314 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:25.887948 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:26.017226 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:26.388451 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:26.516752 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:26.890491 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:27.016106 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:27.387906 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:27.516767 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:27.531904 8355 pod_ready.go:103] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"False"
I0927 00:16:27.890543 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:28.020693 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:28.039434 8355 pod_ready.go:93] pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.039462 8355 pod_ready.go:82] duration metric: took 26.015727515s for pod "coredns-7c65d6cfc9-tvzhv" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.039474 8355 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.044984 8355 pod_ready.go:93] pod "etcd-addons-835847" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.045007 8355 pod_ready.go:82] duration metric: took 5.52514ms for pod "etcd-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.045019 8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.052060 8355 pod_ready.go:93] pod "kube-apiserver-addons-835847" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.052201 8355 pod_ready.go:82] duration metric: took 7.170344ms for pod "kube-apiserver-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.052215 8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.063830 8355 pod_ready.go:93] pod "kube-controller-manager-addons-835847" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.063857 8355 pod_ready.go:82] duration metric: took 11.632529ms for pod "kube-controller-manager-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.063869 8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sh55m" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.082626 8355 pod_ready.go:93] pod "kube-proxy-sh55m" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.082697 8355 pod_ready.go:82] duration metric: took 18.819175ms for pod "kube-proxy-sh55m" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.082723 8355 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.388236 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:28.428590 8355 pod_ready.go:93] pod "kube-scheduler-addons-835847" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.428654 8355 pod_ready.go:82] duration metric: took 345.90873ms for pod "kube-scheduler-addons-835847" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.428680 8355 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pxf2p" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.516192 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:28.828201 8355 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pxf2p" in "kube-system" namespace has status "Ready":"True"
I0927 00:16:28.828342 8355 pod_ready.go:82] duration metric: took 399.639998ms for pod "nvidia-device-plugin-daemonset-pxf2p" in "kube-system" namespace to be "Ready" ...
I0927 00:16:28.828377 8355 pod_ready.go:39] duration metric: took 36.850509663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
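pod_ready.go waits on individually named pods (coredns, etcd, the control-plane components, kube-proxy, the NVIDIA device plugin) rather than on label selectors. A hedged per-pod variant of the previous sketch, with the pod name taken from the log and the same kubeconfig and timeout assumptions:

// Hedged sketch: wait for one named kube-system pod to report Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	name := "etcd-addons-835847" // one of the named pods from the log above
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep waiting
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println(name, "ready:", err == nil)
}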
I0927 00:16:28.828486 8355 api_server.go:52] waiting for apiserver process to appear ...
I0927 00:16:28.828652 8355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:16:28.846957 8355 api_server.go:72] duration metric: took 39.628618173s to wait for apiserver process to appear ...
I0927 00:16:28.847032 8355 api_server.go:88] waiting for apiserver healthz status ...
I0927 00:16:28.847071 8355 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0927 00:16:28.855046 8355 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0927 00:16:28.856021 8355 api_server.go:141] control plane version: v1.31.1
I0927 00:16:28.856099 8355 api_server.go:131] duration metric: took 9.045418ms to wait for apiserver health ...
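api_server.go verifies liveness by issuing a GET against /healthz on the control-plane endpoint (after confirming the process with pgrep above) and expecting a 200 with body "ok". A small sketch of that probe; the InsecureSkipVerify shortcut is for illustration only, and a real check should trust the cluster CA from the kubeconfig.

// Hedged sketch of an apiserver healthz probe; the endpoint is the one from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok",
	// matching the "returned 200: ok" lines above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}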
I0927 00:16:28.856125 8355 system_pods.go:43] waiting for kube-system pods to appear ...
I0927 00:16:28.887823 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:29.016445 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:29.036384 8355 system_pods.go:59] 17 kube-system pods found
I0927 00:16:29.036420 8355 system_pods.go:61] "coredns-7c65d6cfc9-tvzhv" [a2efa460-a57a-45eb-8364-cf85abad82cf] Running
I0927 00:16:29.036429 8355 system_pods.go:61] "csi-hostpath-attacher-0" [c3869238-f637-430d-b854-92bd76cc44fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0927 00:16:29.036437 8355 system_pods.go:61] "csi-hostpath-resizer-0" [4b297f3d-aeaa-4a5d-8b74-f7174019b812] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0927 00:16:29.036477 8355 system_pods.go:61] "csi-hostpathplugin-jmcgj" [277b306d-cb93-419b-8cee-55a5570d009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0927 00:16:29.036488 8355 system_pods.go:61] "etcd-addons-835847" [79b5e8ee-117c-48e1-baab-fe318fca930e] Running
I0927 00:16:29.036493 8355 system_pods.go:61] "kube-apiserver-addons-835847" [579d15f4-818a-4fbc-a6db-23d34aeffea8] Running
I0927 00:16:29.036498 8355 system_pods.go:61] "kube-controller-manager-addons-835847" [ee2d4237-f212-4068-ba76-af07caa6a2fa] Running
I0927 00:16:29.036521 8355 system_pods.go:61] "kube-ingress-dns-minikube" [afb9c90d-de1d-4d41-a089-c58d2ad953f4] Running
I0927 00:16:29.036526 8355 system_pods.go:61] "kube-proxy-sh55m" [d5ff899a-b75e-429d-bc03-d269a2a48ce2] Running
I0927 00:16:29.036530 8355 system_pods.go:61] "kube-scheduler-addons-835847" [1d263758-8b84-4b6c-995e-7b727372026c] Running
I0927 00:16:29.036537 8355 system_pods.go:61] "metrics-server-84c5f94fbc-5ck7c" [b1561527-6ede-4c7d-89b0-dc3e89f14879] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0927 00:16:29.036545 8355 system_pods.go:61] "nvidia-device-plugin-daemonset-pxf2p" [29a178e4-9317-46ca-b2a2-4a1fa8ca2860] Running
I0927 00:16:29.036551 8355 system_pods.go:61] "registry-66c9cd494c-cfh4x" [7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2] Running
I0927 00:16:29.036561 8355 system_pods.go:61] "registry-proxy-pn662" [eb773589-5926-4f4f-8548-d2dee389a285] Running
I0927 00:16:29.036570 8355 system_pods.go:61] "snapshot-controller-56fcc65765-gdzpx" [f2bb6dd6-0f43-437c-a5b9-d91f084332f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:16:29.036583 8355 system_pods.go:61] "snapshot-controller-56fcc65765-jjm9x" [6aa4e47b-5902-4da5-a4a9-f6ccd932944c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:16:29.036588 8355 system_pods.go:61] "storage-provisioner" [17e1dac7-0278-4861-bbf2-9b70936db1b4] Running
I0927 00:16:29.036599 8355 system_pods.go:74] duration metric: took 180.446819ms to wait for pod list to return data ...
I0927 00:16:29.036606 8355 default_sa.go:34] waiting for default service account to be created ...
I0927 00:16:29.229001 8355 default_sa.go:45] found service account: "default"
I0927 00:16:29.229027 8355 default_sa.go:55] duration metric: took 192.414136ms for default service account to be created ...
I0927 00:16:29.229040 8355 system_pods.go:116] waiting for k8s-apps to be running ...
I0927 00:16:29.388055 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:29.436020 8355 system_pods.go:86] 17 kube-system pods found
I0927 00:16:29.436052 8355 system_pods.go:89] "coredns-7c65d6cfc9-tvzhv" [a2efa460-a57a-45eb-8364-cf85abad82cf] Running
I0927 00:16:29.436074 8355 system_pods.go:89] "csi-hostpath-attacher-0" [c3869238-f637-430d-b854-92bd76cc44fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0927 00:16:29.436082 8355 system_pods.go:89] "csi-hostpath-resizer-0" [4b297f3d-aeaa-4a5d-8b74-f7174019b812] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0927 00:16:29.436090 8355 system_pods.go:89] "csi-hostpathplugin-jmcgj" [277b306d-cb93-419b-8cee-55a5570d009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0927 00:16:29.436103 8355 system_pods.go:89] "etcd-addons-835847" [79b5e8ee-117c-48e1-baab-fe318fca930e] Running
I0927 00:16:29.436112 8355 system_pods.go:89] "kube-apiserver-addons-835847" [579d15f4-818a-4fbc-a6db-23d34aeffea8] Running
I0927 00:16:29.436117 8355 system_pods.go:89] "kube-controller-manager-addons-835847" [ee2d4237-f212-4068-ba76-af07caa6a2fa] Running
I0927 00:16:29.436137 8355 system_pods.go:89] "kube-ingress-dns-minikube" [afb9c90d-de1d-4d41-a089-c58d2ad953f4] Running
I0927 00:16:29.436141 8355 system_pods.go:89] "kube-proxy-sh55m" [d5ff899a-b75e-429d-bc03-d269a2a48ce2] Running
I0927 00:16:29.436145 8355 system_pods.go:89] "kube-scheduler-addons-835847" [1d263758-8b84-4b6c-995e-7b727372026c] Running
I0927 00:16:29.436159 8355 system_pods.go:89] "metrics-server-84c5f94fbc-5ck7c" [b1561527-6ede-4c7d-89b0-dc3e89f14879] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0927 00:16:29.436167 8355 system_pods.go:89] "nvidia-device-plugin-daemonset-pxf2p" [29a178e4-9317-46ca-b2a2-4a1fa8ca2860] Running
I0927 00:16:29.436186 8355 system_pods.go:89] "registry-66c9cd494c-cfh4x" [7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2] Running
I0927 00:16:29.436191 8355 system_pods.go:89] "registry-proxy-pn662" [eb773589-5926-4f4f-8548-d2dee389a285] Running
I0927 00:16:29.436198 8355 system_pods.go:89] "snapshot-controller-56fcc65765-gdzpx" [f2bb6dd6-0f43-437c-a5b9-d91f084332f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:16:29.436209 8355 system_pods.go:89] "snapshot-controller-56fcc65765-jjm9x" [6aa4e47b-5902-4da5-a4a9-f6ccd932944c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:16:29.436213 8355 system_pods.go:89] "storage-provisioner" [17e1dac7-0278-4861-bbf2-9b70936db1b4] Running
I0927 00:16:29.436221 8355 system_pods.go:126] duration metric: took 207.17573ms to wait for k8s-apps to be running ...
I0927 00:16:29.436232 8355 system_svc.go:44] waiting for kubelet service to be running ....
I0927 00:16:29.436286 8355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0927 00:16:29.448511 8355 system_svc.go:56] duration metric: took 12.260841ms WaitForService to wait for kubelet
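The ssh_runner lines are plain shell probes executed inside the node: pgrep confirms an apiserver process exists and systemctl is-active confirms the kubelet unit is up, with only the exit status inspected. Run locally via os/exec (minikube executes them over SSH into the node), the same checks look roughly like this:

// Hedged sketch: the command lines are copied from the log; exit status 0 means success.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := [][]string{
		{"sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"},
		{"sudo", "systemctl", "is-active", "--quiet", "service", "kubelet"},
	}
	for _, args := range probes {
		err := exec.Command(args[0], args[1:]...).Run()
		// A nil error means the command exited 0: the process/service is present.
		fmt.Printf("%v -> ok=%v\n", args, err == nil)
	}
}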
I0927 00:16:29.448539 8355 kubeadm.go:582] duration metric: took 40.230205109s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0927 00:16:29.448557 8355 node_conditions.go:102] verifying NodePressure condition ...
I0927 00:16:29.515913 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:29.628841 8355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0927 00:16:29.628870 8355 node_conditions.go:123] node cpu capacity is 2
I0927 00:16:29.628884 8355 node_conditions.go:105] duration metric: took 180.321357ms to run NodePressure ...
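node_conditions.go reads each node's capacity (2 CPUs and 203034800Ki of ephemeral storage here) and checks that no pressure condition is set. A client-go sketch of that verification, under the same kubeconfig assumption as the earlier sketches:

// Hedged sketch: print node capacity and flag any non-Ready condition that is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the cpu/ephemeral-storage figures in the log.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
}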
I0927 00:16:29.628897 8355 start.go:241] waiting for startup goroutines ...
I0927 00:16:29.887901 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:30.019814 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:30.388655 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:30.516276 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:30.888309 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:31.016826 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:31.387344 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:31.515798 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:31.887855 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:32.016762 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:32.388644 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:32.516703 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:32.889323 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:33.016690 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:33.388234 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:33.517506 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:33.889666 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:34.017067 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:34.393336 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:34.516704 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:34.888972 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:35.017502 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:35.387989 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:35.516621 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:35.889260 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:36.016903 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:36.387590 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:36.516484 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:36.889251 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:37.018454 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:37.387175 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:37.516855 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:37.888421 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:38.017428 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:38.388008 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:38.516284 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:38.887736 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:39.017081 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:39.388661 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:39.516244 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:39.889025 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:40.016646 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:40.388525 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:40.516180 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:40.888744 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:41.016698 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:41.388157 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:41.516980 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:41.887919 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:42.017076 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:42.388434 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:42.516434 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:42.890362 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:43.016994 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:43.388677 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:43.517754 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:43.888618 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:44.016243 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:44.388699 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:44.516696 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:44.889791 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:45.017419 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:45.389248 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:45.517001 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:45.887186 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:46.016775 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:46.388771 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:46.520835 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:46.888618 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:47.016245 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:47.391068 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:47.517463 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:47.888830 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:48.017254 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:48.387928 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:48.516380 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:48.888506 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:49.015714 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:49.387648 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:49.516299 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:49.950022 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:50.017022 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:50.388554 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:50.516494 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:50.887896 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:51.018663 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:51.387606 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:51.515757 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:51.889945 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:52.016739 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:52.389570 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:52.517201 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:52.888501 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:53.015958 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:53.387557 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:53.516723 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:53.887963 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:54.016545 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:54.387781 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:54.517994 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:54.887968 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:55.016658 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:55.387368 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:55.515881 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:55.887408 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:56.015711 8355 kapi.go:107] duration metric: took 53.004208057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0927 00:16:56.387360 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:56.888650 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:57.387095 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:57.887924 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:58.387462 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:58.887369 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:59.388194 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:16:59.886836 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:00.387644 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:00.888249 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:01.387981 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:01.888133 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:02.387236 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:02.888603 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:03.387580 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:03.887910 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:04.388113 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:04.888457 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:05.388422 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:05.887789 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:06.388454 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:06.889014 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:07.387580 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:07.888363 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:08.387758 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:08.888169 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:09.390317 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:09.892648 8355 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0927 00:17:10.388646 8355 kapi.go:107] duration metric: took 1m11.005413483s to wait for app.kubernetes.io/name=ingress-nginx ...
I0927 00:17:26.745172 8355 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0927 00:17:26.745192 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:27.246153 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:27.746026 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:28.246045 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:28.745918 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:29.245137 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:29.746505 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:30.246223 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:30.746210 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:31.246016 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:31.745715 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:32.245864 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:32.745127 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:33.246038 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:33.746298 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:34.245524 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:34.745458 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:35.245433 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:35.745396 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:36.246003 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:36.746234 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:37.245588 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:37.745315 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:38.245131 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:38.745966 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:39.245650 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:39.745247 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:40.245909 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:40.745846 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:41.247733 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:41.744900 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:42.245467 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:42.744951 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:43.245524 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:43.745418 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:44.245165 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:44.746106 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:45.246490 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:45.745418 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:46.245102 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:46.745921 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:47.245855 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:47.745223 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:48.245196 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:48.745936 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:49.245741 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:49.746109 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:50.246399 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:50.745085 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:51.246212 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:51.745711 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:52.245340 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:52.745492 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:53.246078 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:53.745880 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:54.245628 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:54.745948 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:55.245148 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:55.745871 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:56.245557 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:56.746716 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:57.245215 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:57.746119 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:58.245631 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:58.746046 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:59.245994 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:59.746862 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:00.245370 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:00.745784 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:01.245813 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:01.745437 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:02.245821 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:02.746479 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:03.245303 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:03.745102 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:04.245541 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:04.745328 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:05.245806 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:05.746644 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:06.244798 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:06.746659 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:07.246001 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:07.747134 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:08.245867 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:08.746814 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:09.245856 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:09.745399 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:10.245697 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:10.745722 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:11.252354 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:11.745387 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:12.245076 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:12.745376 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:13.245477 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:13.745643 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:14.246042 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:14.746022 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:15.245407 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:15.746454 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:16.244840 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:16.745555 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:17.245395 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:17.746617 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:18.245144 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:18.745849 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:19.245721 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:19.745979 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:20.245412 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:20.745538 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:21.245592 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:21.746382 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:22.245099 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:22.745758 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:23.245483 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:23.745442 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:24.246518 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:24.745272 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:25.245555 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:25.746792 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:26.245291 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:26.745704 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:27.246063 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:27.745810 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:28.245529 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:28.745619 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:29.244701 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:29.746460 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:30.245105 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:30.746103 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:31.245003 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:31.745955 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:32.245975 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:32.746702 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:33.245666 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:33.747981 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:34.245319 8355 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:18:34.745738 8355 kapi.go:107] duration metric: took 2m30.003995474s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0927 00:18:34.748011 8355 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-835847 cluster.
I0927 00:18:34.750026 8355 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0927 00:18:34.751842 8355 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0927 00:18:34.753590 8355 out.go:177] * Enabled addons: ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0927 00:18:34.755367 8355 addons.go:510] duration metric: took 2m45.536656652s for enable addons: enabled=[ingress-dns storage-provisioner-rancher nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0927 00:18:34.755422 8355 start.go:246] waiting for cluster config update ...
I0927 00:18:34.755444 8355 start.go:255] writing updated cluster config ...
I0927 00:18:34.755733 8355 ssh_runner.go:195] Run: rm -f paused
I0927 00:18:35.068794 8355 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0927 00:18:35.071548 8355 out.go:177] * Done! kubectl is now configured to use "addons-835847" cluster and "default" namespace by default
==> Docker <==
Sep 27 00:28:14 addons-835847 cri-dockerd[1544]: time="2024-09-27T00:28:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3f209215e53b1404b23abee7940546f4ed9bc07968b89ddefbf1b54b6d86a97/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 27 00:28:15 addons-835847 cri-dockerd[1544]: time="2024-09-27T00:28:15Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
Sep 27 00:28:21 addons-835847 dockerd[1284]: time="2024-09-27T00:28:21.556827857Z" level=info msg="ignoring event" container=50c5d43ae3331eab81e85bf51ff3a6948a48a7376e6e07e5b573e02ad258d826 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:21 addons-835847 dockerd[1284]: time="2024-09-27T00:28:21.681046451Z" level=info msg="ignoring event" container=e3f209215e53b1404b23abee7940546f4ed9bc07968b89ddefbf1b54b6d86a97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.278515633Z" level=info msg="ignoring event" container=571b69b635a8758a3c5aa749398f0879078b7d33540952e5ac828ae81f196d31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.371556942Z" level=info msg="ignoring event" container=5dd7b2109168a8b604240546dca351210663bb3178aceb985051dbd6e7404e4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.383623841Z" level=info msg="ignoring event" container=047431751ba787cac6ea12753334a27a03bf4b7085faff84c16c331691a147e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.398696379Z" level=info msg="ignoring event" container=02872af86ff36ab5ee26842c998c09ed342f6bb76d30ecbc4538c3419094b914 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.406084270Z" level=info msg="ignoring event" container=f7c03209d552a21f72906d523edac4a2e0bac058c2dadcef925f03deb32e9e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.410017667Z" level=info msg="ignoring event" container=78e4b11a9cc699c0613269f6e265bfaa2e751028449ef840d122df88a62ff7d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.454200660Z" level=info msg="ignoring event" container=3b9efd2fa9867692860a2beba3c9d0a330e8d0fc7d58b419f108397ec01dbc25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.471929832Z" level=info msg="ignoring event" container=de679a3c1cb52e2944f8d62e62db7dcb077635c9345283d125811fdf765cb58a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.600275710Z" level=info msg="ignoring event" container=4d2b09e15e6f09fe7504e0fa848e4ac215abbbdc071276c6ac863a7529578970 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.669481033Z" level=info msg="ignoring event" container=232d9b0f19a3c81c1c3c3d96c4badbd7237201333ecbc54a3a4729a13ee65d17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:23 addons-835847 dockerd[1284]: time="2024-09-27T00:28:23.707077300Z" level=info msg="ignoring event" container=9369eed71bdf12db0359ac78626ed55dcfcca30e4228cb5c7be746876b525a4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:29 addons-835847 dockerd[1284]: time="2024-09-27T00:28:29.886276734Z" level=info msg="ignoring event" container=444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:29 addons-835847 dockerd[1284]: time="2024-09-27T00:28:29.901546157Z" level=info msg="ignoring event" container=d971adaad037c1f787aeeaef3fb4f1643b6b34322f2eb4c4f5b32457b49fbcf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:30 addons-835847 dockerd[1284]: time="2024-09-27T00:28:30.065645242Z" level=info msg="ignoring event" container=25811c9ebe8787cf62dc73e5ffce884af6314161cdf7bd96a0c6ce670726fb7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:30 addons-835847 dockerd[1284]: time="2024-09-27T00:28:30.143754606Z" level=info msg="ignoring event" container=7ceb584e51fd3c870e423f721f931dd271a50234e0e0ab5324aff5e593fad238 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:30 addons-835847 dockerd[1284]: time="2024-09-27T00:28:30.500867739Z" level=info msg="ignoring event" container=c94749d12ab9612939db7c0a7fbeba8c61c6bf02890b760c127d1c6bbb5c634a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.053018304Z" level=info msg="ignoring event" container=7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.133491160Z" level=info msg="ignoring event" container=925ecbc7f57cabd7ab39dc74270d893f34d143bb4c0be03ecd693fe771229367 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.333453560Z" level=info msg="ignoring event" container=f6b1af5fb78c37bf2b8d6a858a1c710dc43b137dac00bc49e2c4699dc339cba1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:31 addons-835847 dockerd[1284]: time="2024-09-27T00:28:31.452934553Z" level=info msg="ignoring event" container=6f5a76e7c0d42e555b12a2554b17dbc681d53cca9e0ea34e6b08883b506053dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:28:31 addons-835847 cri-dockerd[1544]: time="2024-09-27T00:28:31Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-pn662_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6f5a76e7c0d42e555b12a2554b17dbc681d53cca9e0ea34e6b08883b506053dc\""
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
ecc4d5db675d2 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 36 seconds ago Exited gadget 7 be6deb646d3ab gadget-vgn99
2c231eb43d312 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 59fda7f313330 gcp-auth-89d5ffd79-zh66d
9363655aec613 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 f016c22499a0d ingress-nginx-controller-bc57996ff-p24hh
5a67fde840c70 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 2ee370dae128c ingress-nginx-admission-patch-2vwz7
c241abdb2d040 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 016b997eb5c41 ingress-nginx-admission-create-zmrlk
062615b753050 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 12 minutes ago Running yakd 0 cea03d1f057cf yakd-dashboard-67d98fc6b-5zgdz
b8a26a48dd957 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 aceb0abcbba69 metrics-server-84c5f94fbc-5ck7c
dee6d4ee2c7c8 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 62f5d14093b8a local-path-provisioner-86d989889c-q2t26
e24271cdd951e gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 12 minutes ago Running cloud-spanner-emulator 0 04dbcab398158 cloud-spanner-emulator-5b584cc74-nmwkl
dab4141b089d3 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 3331e14188e68 kube-ingress-dns-minikube
f16d5fef10c74 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 547d52adae155 nvidia-device-plugin-daemonset-pxf2p
7f26961be53af ba04bb24b9575 12 minutes ago Running storage-provisioner 0 4d3a88573a6cd storage-provisioner
5b8816dfaa21d 2f6c962e7b831 12 minutes ago Running coredns 0 f6f0726e5877d coredns-7c65d6cfc9-tvzhv
a8d675bfc6703 24a140c548c07 12 minutes ago Running kube-proxy 0 f68aef1cccba2 kube-proxy-sh55m
32e135bfac565 279f381cb3736 12 minutes ago Running kube-controller-manager 0 ecb9dc9058b4f kube-controller-manager-addons-835847
c9890999f9cce 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 fb5ff4baef843 kube-scheduler-addons-835847
0ba8c27835f8a 27e3830e14027 12 minutes ago Running etcd 0 7226344b1c107 etcd-addons-835847
d5a41b0f8c76b d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 08ffb47a70cbe kube-apiserver-addons-835847
==> controller_ingress [9363655aec61] <==
NGINX Ingress controller
Release: v1.11.2
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
I0927 00:17:09.101883 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
I0927 00:17:09.931154 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0927 00:17:09.992269 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0927 00:17:10.006565 7 nginx.go:271] "Starting NGINX Ingress controller"
I0927 00:17:10.027496 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a1815dc4-44fb-4e28-9732-b133414d44e7", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0927 00:17:10.036185 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d1d8441a-17c1-43d1-8f2b-6332918e5a69", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0927 00:17:10.036222 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"3a094131-3bd8-4760-91d4-c48fc23e469a", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0927 00:17:11.208141 7 nginx.go:317] "Starting NGINX process"
I0927 00:17:11.208253 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0927 00:17:11.208487 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0927 00:17:11.208657 7 controller.go:193] "Configuration changes detected, backend reload required"
I0927 00:17:11.217218 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0927 00:17:11.218203 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-p24hh"
I0927 00:17:11.227246 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-p24hh" node="addons-835847"
I0927 00:17:11.252216 7 controller.go:213] "Backend successfully reloaded"
I0927 00:17:11.252303 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0927 00:17:11.252452 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-p24hh", UID:"b6d5039f-a25c-4fdd-a953-8ab0bdc94a32", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [5b8816dfaa21] <==
[INFO] 10.244.0.8:44184 - 51573 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092372s
[INFO] 10.244.0.8:44184 - 24699 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002398412s
[INFO] 10.244.0.8:44184 - 2339 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002213111s
[INFO] 10.244.0.8:44184 - 48605 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145966s
[INFO] 10.244.0.8:44184 - 1809 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00010989s
[INFO] 10.244.0.8:49095 - 57180 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000295396s
[INFO] 10.244.0.8:49095 - 57418 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105737s
[INFO] 10.244.0.8:50055 - 27977 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063195s
[INFO] 10.244.0.8:50055 - 28413 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067543s
[INFO] 10.244.0.8:57616 - 59023 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074485s
[INFO] 10.244.0.8:57616 - 59204 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038736s
[INFO] 10.244.0.8:41998 - 38974 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003091405s
[INFO] 10.244.0.8:41998 - 38802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003982752s
[INFO] 10.244.0.8:35760 - 21191 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079301s
[INFO] 10.244.0.8:35760 - 21380 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040918s
[INFO] 10.244.0.25:58942 - 63161 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195516s
[INFO] 10.244.0.25:47346 - 49039 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000140125s
[INFO] 10.244.0.25:33720 - 9844 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127251s
[INFO] 10.244.0.25:49152 - 46325 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091707s
[INFO] 10.244.0.25:40956 - 39954 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096876s
[INFO] 10.244.0.25:59262 - 12709 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086341s
[INFO] 10.244.0.25:52788 - 61254 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002751226s
[INFO] 10.244.0.25:57471 - 47251 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002596284s
[INFO] 10.244.0.25:35907 - 20023 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00200994s
[INFO] 10.244.0.25:33455 - 57395 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001557271s
==> describe nodes <==
Name: addons-835847
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-835847
kubernetes.io/os=linux
minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
minikube.k8s.io/name=addons-835847
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_27T00_15_44_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-835847
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 27 Sep 2024 00:15:41 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-835847
AcquireTime: <unset>
RenewTime: Fri, 27 Sep 2024 00:28:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 27 Sep 2024 00:27:47 +0000 Fri, 27 Sep 2024 00:15:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 27 Sep 2024 00:27:47 +0000 Fri, 27 Sep 2024 00:15:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 27 Sep 2024 00:27:47 +0000 Fri, 27 Sep 2024 00:15:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 27 Sep 2024 00:27:47 +0000 Fri, 27 Sep 2024 00:15:41 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-835847
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: fa692830ed544dfda90a0dde21ecfabb
System UUID: d753872c-7080-4426-b42a-b70d7a7c1bc7
Boot ID: fe6ac0e5-a46e-47ee-84bc-0bc2ad3e866e
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (17 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m16s
default cloud-spanner-emulator-5b584cc74-nmwkl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-vgn99 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-zh66d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-p24hh 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-tvzhv 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system etcd-addons-835847 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-835847 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-835847 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-sh55m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-835847 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-5ck7c 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system nvidia-device-plugin-daemonset-pxf2p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-q2t26 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
yakd-dashboard yakd-dashboard-67d98fc6b-5zgdz 0 (0%) 0 (0%) 128Mi (1%) 256Mi (3%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 588Mi (7%) 426Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-835847 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-835847 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-835847 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-835847 event: Registered Node addons-835847 in Controller
==> dmesg <==
[Sep26 23:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014578] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.452316] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.063261] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.019563] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.667102] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.021396] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [0ba8c27835f8] <==
{"level":"info","ts":"2024-09-27T00:15:37.505502Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-09-27T00:15:37.505537Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-09-27T00:15:37.676115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-27T00:15:37.676395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-27T00:15:37.676571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-27T00:15:37.676727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-27T00:15:37.676867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-27T00:15:37.677028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-27T00:15:37.677136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-27T00:15:37.679697Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:37.682361Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-835847 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-27T00:15:37.684129Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-27T00:15:37.684770Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-27T00:15:37.685770Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-27T00:15:37.686940Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-27T00:15:37.688129Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:37.691721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:37.691878Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:37.688853Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-27T00:15:37.693169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-27T00:15:37.694900Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-27T00:15:37.694928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-27T00:25:38.427524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1862}
{"level":"info","ts":"2024-09-27T00:25:38.472611Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1862,"took":"44.194266ms","hash":3342251583,"current-db-size-bytes":9072640,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4923392,"current-db-size-in-use":"4.9 MB"}
{"level":"info","ts":"2024-09-27T00:25:38.472659Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3342251583,"revision":1862,"compact-revision":-1}
==> gcp-auth [2c231eb43d31] <==
2024/09/27 00:18:34 GCP Auth Webhook started!
2024/09/27 00:18:51 Ready to marshal response ...
2024/09/27 00:18:51 Ready to write response ...
2024/09/27 00:18:52 Ready to marshal response ...
2024/09/27 00:18:52 Ready to write response ...
2024/09/27 00:19:15 Ready to marshal response ...
2024/09/27 00:19:15 Ready to write response ...
2024/09/27 00:19:16 Ready to marshal response ...
2024/09/27 00:19:16 Ready to write response ...
2024/09/27 00:19:16 Ready to marshal response ...
2024/09/27 00:19:16 Ready to write response ...
2024/09/27 00:27:19 Ready to marshal response ...
2024/09/27 00:27:19 Ready to write response ...
2024/09/27 00:27:19 Ready to marshal response ...
2024/09/27 00:27:19 Ready to write response ...
2024/09/27 00:27:19 Ready to marshal response ...
2024/09/27 00:27:19 Ready to write response ...
2024/09/27 00:27:30 Ready to marshal response ...
2024/09/27 00:27:30 Ready to write response ...
2024/09/27 00:27:48 Ready to marshal response ...
2024/09/27 00:27:48 Ready to write response ...
2024/09/27 00:28:14 Ready to marshal response ...
2024/09/27 00:28:14 Ready to write response ...
==> kernel <==
00:28:32 up 1:11, 0 users, load average: 0.28, 0.36, 0.38
Linux addons-835847 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [d5a41b0f8c76] <==
I0927 00:19:06.424299 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0927 00:19:06.502332 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0927 00:19:06.572962 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0927 00:19:06.683568 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0927 00:19:06.947479 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0927 00:19:07.163385 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0927 00:19:07.163412 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0927 00:19:07.168280 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0927 00:19:07.573264 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0927 00:19:07.833031 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0927 00:27:19.836774 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.230.114"}
I0927 00:27:56.062605 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0927 00:28:29.601054 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0927 00:28:29.601116 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0927 00:28:29.622077 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0927 00:28:29.622236 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0927 00:28:29.629209 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0927 00:28:29.629248 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0927 00:28:29.672624 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0927 00:28:29.673081 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0927 00:28:29.769430 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0927 00:28:29.769470 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0927 00:28:30.623389 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0927 00:28:30.770440 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0927 00:28:30.879449 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [32e135bfac56] <==
E0927 00:27:58.022590 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:28:05.100394 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:05.100440 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:28:07.893216 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:07.893262 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:28:11.238307 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:11.238351 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:28:15.488532 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:15.488605 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0927 00:28:23.184354 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
I0927 00:28:23.276676 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
I0927 00:28:23.447457 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-835847"
W0927 00:28:23.932780 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:23.932828 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0927 00:28:29.799098 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="6.556µs"
E0927 00:28:30.625717 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0927 00:28:30.772177 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0927 00:28:30.881661 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0927 00:28:30.968506 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.884µs"
W0927 00:28:32.174814 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:32.174854 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:28:32.275202 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:32.275246 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:28:32.337181 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:28:32.337231 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [a8d675bfc670] <==
I0927 00:15:50.541800 1 server_linux.go:66] "Using iptables proxy"
I0927 00:15:50.655046 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0927 00:15:50.655117 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0927 00:15:50.696958 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0927 00:15:50.697012 1 server_linux.go:169] "Using iptables Proxier"
I0927 00:15:50.698509 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0927 00:15:50.698823 1 server.go:483] "Version info" version="v1.31.1"
I0927 00:15:50.698837 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0927 00:15:50.700772 1 config.go:199] "Starting service config controller"
I0927 00:15:50.700797 1 shared_informer.go:313] Waiting for caches to sync for service config
I0927 00:15:50.700826 1 config.go:105] "Starting endpoint slice config controller"
I0927 00:15:50.700830 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0927 00:15:50.700840 1 config.go:328] "Starting node config controller"
I0927 00:15:50.700852 1 shared_informer.go:313] Waiting for caches to sync for node config
I0927 00:15:50.801108 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0927 00:15:50.801161 1 shared_informer.go:320] Caches are synced for service config
I0927 00:15:50.801361 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [c9890999f9cc] <==
W0927 00:15:42.161470 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0927 00:15:42.161566 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0927 00:15:42.161636 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0927 00:15:42.161498 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.161789 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0927 00:15:42.161818 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.161945 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0927 00:15:42.162013 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162034 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0927 00:15:42.162107 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162242 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0927 00:15:42.162381 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162411 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0927 00:15:42.162646 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162463 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0927 00:15:42.162972 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162362 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0927 00:15:42.163235 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162505 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0927 00:15:42.163777 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162557 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0927 00:15:42.164194 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:42.162301 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0927 00:15:42.164432 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0927 00:15:43.749707 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.411030 2340 scope.go:117] "RemoveContainer" containerID="444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.457974 2340 scope.go:117] "RemoveContainer" containerID="444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
Sep 27 00:28:30 addons-835847 kubelet[2340]: E0927 00:28:30.459227 2340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c" containerID="444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.459260 2340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"} err="failed to get container status \"444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 444e4f0a957334f051380f9bff75c5a5e02fd8f8a787d682c8e2c42cfea5497c"
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.602967 2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a9bb701-661e-4834-9c85-c40c6ad26b6f-gcp-creds\") pod \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\" (UID: \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\") "
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.603156 2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhmht\" (UniqueName: \"kubernetes.io/projected/5a9bb701-661e-4834-9c85-c40c6ad26b6f-kube-api-access-rhmht\") pod \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\" (UID: \"5a9bb701-661e-4834-9c85-c40c6ad26b6f\") "
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.603451 2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a9bb701-661e-4834-9c85-c40c6ad26b6f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5a9bb701-661e-4834-9c85-c40c6ad26b6f" (UID: "5a9bb701-661e-4834-9c85-c40c6ad26b6f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.605369 2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9bb701-661e-4834-9c85-c40c6ad26b6f-kube-api-access-rhmht" (OuterVolumeSpecName: "kube-api-access-rhmht") pod "5a9bb701-661e-4834-9c85-c40c6ad26b6f" (UID: "5a9bb701-661e-4834-9c85-c40c6ad26b6f"). InnerVolumeSpecName "kube-api-access-rhmht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.703556 2340 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a9bb701-661e-4834-9c85-c40c6ad26b6f-gcp-creds\") on node \"addons-835847\" DevicePath \"\""
Sep 27 00:28:30 addons-835847 kubelet[2340]: I0927 00:28:30.703596 2340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rhmht\" (UniqueName: \"kubernetes.io/projected/5a9bb701-661e-4834-9c85-c40c6ad26b6f-kube-api-access-rhmht\") on node \"addons-835847\" DevicePath \"\""
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.512518 2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vk4p\" (UniqueName: \"kubernetes.io/projected/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2-kube-api-access-4vk4p\") pod \"7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2\" (UID: \"7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2\") "
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.524408 2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2-kube-api-access-4vk4p" (OuterVolumeSpecName: "kube-api-access-4vk4p") pod "7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2" (UID: "7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2"). InnerVolumeSpecName "kube-api-access-4vk4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.544832 2340 scope.go:117] "RemoveContainer" containerID="925ecbc7f57cabd7ab39dc74270d893f34d143bb4c0be03ecd693fe771229367"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.588472 2340 scope.go:117] "RemoveContainer" containerID="7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.612774 2340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xmd\" (UniqueName: \"kubernetes.io/projected/eb773589-5926-4f4f-8548-d2dee389a285-kube-api-access-j7xmd\") pod \"eb773589-5926-4f4f-8548-d2dee389a285\" (UID: \"eb773589-5926-4f4f-8548-d2dee389a285\") "
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.612915 2340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4vk4p\" (UniqueName: \"kubernetes.io/projected/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2-kube-api-access-4vk4p\") on node \"addons-835847\" DevicePath \"\""
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.616004 2340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb773589-5926-4f4f-8548-d2dee389a285-kube-api-access-j7xmd" (OuterVolumeSpecName: "kube-api-access-j7xmd") pod "eb773589-5926-4f4f-8548-d2dee389a285" (UID: "eb773589-5926-4f4f-8548-d2dee389a285"). InnerVolumeSpecName "kube-api-access-j7xmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.676823 2340 scope.go:117] "RemoveContainer" containerID="7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
Sep 27 00:28:31 addons-835847 kubelet[2340]: E0927 00:28:31.678316 2340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25" containerID="7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.678367 2340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"} err="failed to get container status \"7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7bd0556fe79cb8c07f3c4beeced86717ccc8e927ae5c2b86e8c1d498e61a0d25"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.713716 2340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j7xmd\" (UniqueName: \"kubernetes.io/projected/eb773589-5926-4f4f-8548-d2dee389a285-kube-api-access-j7xmd\") on node \"addons-835847\" DevicePath \"\""
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.819384 2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9bb701-661e-4834-9c85-c40c6ad26b6f" path="/var/lib/kubelet/pods/5a9bb701-661e-4834-9c85-c40c6ad26b6f/volumes"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.819811 2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa4e47b-5902-4da5-a4a9-f6ccd932944c" path="/var/lib/kubelet/pods/6aa4e47b-5902-4da5-a4a9-f6ccd932944c/volumes"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.820354 2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2" path="/var/lib/kubelet/pods/7f0b7f16-5783-4ad7-9e56-e6e6b7ebddf2/volumes"
Sep 27 00:28:31 addons-835847 kubelet[2340]: I0927 00:28:31.820715 2340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bb6dd6-0f43-437c-a5b9-d91f084332f5" path="/var/lib/kubelet/pods/f2bb6dd6-0f43-437c-a5b9-d91f084332f5/volumes"
==> storage-provisioner [7f26961be53a] <==
I0927 00:15:56.338504 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0927 00:15:56.358085 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0927 00:15:56.358171 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0927 00:15:56.384983 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0927 00:15:56.385369 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"175f1bd0-9e7f-4586-abbf-cb5aea70e889", APIVersion:"v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-835847_89578564-4b80-478e-ad16-0b6cb68ab36e became leader
I0927 00:15:56.385395 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-835847_89578564-4b80-478e-ad16-0b6cb68ab36e!
I0927 00:15:56.486781 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-835847_89578564-4b80-478e-ad16-0b6cb68ab36e!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-835847 -n addons-835847
helpers_test.go:261: (dbg) Run: kubectl --context addons-835847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-835847 describe pod busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-835847 describe pod busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7: exit status 1 (93.226593ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-835847/192.168.49.2
Start Time:       Fri, 27 Sep 2024 00:19:16 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29ff8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-29ff8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-835847
  Warning  Failed     7m58s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    7m44s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m43s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m43s (x4 over 9m17s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m14s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-zmrlk" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-2vwz7" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-835847 describe pod busybox ingress-nginx-admission-create-zmrlk ingress-nginx-admission-patch-2vwz7: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.31s)