=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.282719ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-rdvzj" [1071ed50-a346-48af-bd60-fb6e526e1d58] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005386312s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ggxvp" [a0c7860c-3f6b-40f2-9761-cd6466b5e812] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003671686s
addons_test.go:338: (dbg) Run: kubectl --context addons-703944 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-703944 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-703944 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.11131623s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-703944 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-arm64 -p addons-703944 ip
2024/09/30 10:34:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-arm64 -p addons-703944 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-703944
helpers_test.go:235: (dbg) docker inspect addons-703944:
-- stdout --
[
{
"Id": "6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3",
"Created": "2024-09-30T10:21:01.950380753Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8882,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-30T10:21:02.101402153Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
"ResolvConfPath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/hostname",
"HostsPath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/hosts",
"LogPath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3-json.log",
"Name": "/addons-703944",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-703944:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-703944",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341-init/diff:/var/lib/docker/overlay2/617a358269990fa6af831f14aa0a1cf249355fc559e21616870630a688e89f21/diff",
"MergedDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341/merged",
"UpperDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341/diff",
"WorkDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-703944",
"Source": "/var/lib/docker/volumes/addons-703944/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-703944",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-703944",
"name.minikube.sigs.k8s.io": "addons-703944",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "85d364dc746bc1ce06d8b03501ac5a967ba05830aa47aff44bcf1bc33f7e0da3",
"SandboxKey": "/var/run/docker/netns/85d364dc746b",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-703944": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "f903481ea82c7fe80c306cb66548f367f308b7e33d8f02c92e4a74c877559ea7",
"EndpointID": "dbea6e478a8d244a877708b0f077cd418ec819855b0b951a50fe93ad9f76343c",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-703944",
"6ba2c206eb4f"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-703944 -n addons-703944
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-703944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 logs -n 25: (1.157087331s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-464574 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | -p download-only-464574 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p download-only-464574 | download-only-464574 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| start | -o=json --download-only | download-only-328857 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | -p download-only-328857 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p download-only-328857 | download-only-328857 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p download-only-464574 | download-only-464574 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p download-only-328857 | download-only-328857 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| start | --download-only -p | download-docker-398252 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | download-docker-398252 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-398252 | download-docker-398252 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| start | --download-only -p | binary-mirror-159609 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | binary-mirror-159609 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:34175 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-159609 | binary-mirror-159609 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| addons | disable dashboard -p | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | addons-703944 | | | | | |
| addons | enable dashboard -p | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | addons-703944 | | | | | |
| start | -p addons-703944 --wait=true | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:24 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-703944 addons disable | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-703944 addons disable | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-703944 addons | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-703944 addons | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
| | -p addons-703944 | | | | | |
| ssh | addons-703944 ssh cat | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | 30 Sep 24 10:34 UTC |
| | /opt/local-path-provisioner/pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4_default_test-pvc/file1 | | | | | |
| addons | addons-703944 addons disable | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-703944 ip | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | 30 Sep 24 10:34 UTC |
| addons | addons-703944 addons disable | addons-703944 | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | 30 Sep 24 10:34 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/30 10:20:38
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0930 10:20:38.157760 8372 out.go:345] Setting OutFile to fd 1 ...
I0930 10:20:38.157902 8372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:20:38.157934 8372 out.go:358] Setting ErrFile to fd 2...
I0930 10:20:38.157953 8372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:20:38.158680 8372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
I0930 10:20:38.159214 8372 out.go:352] Setting JSON to false
I0930 10:20:38.160048 8372 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":187,"bootTime":1727691452,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0930 10:20:38.160118 8372 start.go:139] virtualization:
I0930 10:20:38.165835 8372 out.go:177] * [addons-703944] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0930 10:20:38.177032 8372 notify.go:220] Checking for updates...
I0930 10:20:38.199157 8372 out.go:177] - MINIKUBE_LOCATION=19734
I0930 10:20:38.222607 8372 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0930 10:20:38.239899 8372 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
I0930 10:20:38.255011 8372 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
I0930 10:20:38.266973 8372 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0930 10:20:38.277170 8372 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0930 10:20:38.286468 8372 driver.go:394] Setting default libvirt URI to qemu:///system
I0930 10:20:38.306533 8372 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I0930 10:20:38.306691 8372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0930 10:20:38.363805 8372 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:20:38.354707309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0930 10:20:38.363917 8372 docker.go:318] overlay module found
I0930 10:20:38.393744 8372 out.go:177] * Using the docker driver based on user configuration
I0930 10:20:38.421072 8372 start.go:297] selected driver: docker
I0930 10:20:38.421097 8372 start.go:901] validating driver "docker" against <nil>
I0930 10:20:38.421112 8372 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0930 10:20:38.421739 8372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0930 10:20:38.481156 8372 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:20:38.472339623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0930 10:20:38.481370 8372 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0930 10:20:38.481604 8372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0930 10:20:38.489171 8372 out.go:177] * Using Docker driver with root privileges
I0930 10:20:38.500672 8372 cni.go:84] Creating CNI manager for ""
I0930 10:20:38.500756 8372 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0930 10:20:38.500776 8372 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0930 10:20:38.500861 8372 start.go:340] cluster config:
{Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0930 10:20:38.513247 8372 out.go:177] * Starting "addons-703944" primary control-plane node in "addons-703944" cluster
I0930 10:20:38.521393 8372 cache.go:121] Beginning downloading kic base image for docker with docker
I0930 10:20:38.529772 8372 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
I0930 10:20:38.538499 8372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 10:20:38.538551 8372 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0930 10:20:38.538563 8372 cache.go:56] Caching tarball of preloaded images
I0930 10:20:38.538593 8372 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
I0930 10:20:38.538643 8372 preload.go:172] Found /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0930 10:20:38.538653 8372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0930 10:20:38.539014 8372 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/config.json ...
I0930 10:20:38.539094 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/config.json: {Name:mk3b2c38eac4f5deeba0c330b8da3185b9a33420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:20:38.554140 8372 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
I0930 10:20:38.554242 8372 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
I0930 10:20:38.554259 8372 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
I0930 10:20:38.554263 8372 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
I0930 10:20:38.554270 8372 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
I0930 10:20:38.554275 8372 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
I0930 10:20:54.942038 8372 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
I0930 10:20:54.942078 8372 cache.go:194] Successfully downloaded all kic artifacts
I0930 10:20:54.942116 8372 start.go:360] acquireMachinesLock for addons-703944: {Name:mk960c67440ef6a65350b6922242ffb4f2c250f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 10:20:54.942234 8372 start.go:364] duration metric: took 97.852µs to acquireMachinesLock for "addons-703944"
I0930 10:20:54.942277 8372 start.go:93] Provisioning new machine with config: &{Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0930 10:20:54.942348 8372 start.go:125] createHost starting for "" (driver="docker")
I0930 10:20:54.944863 8372 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0930 10:20:54.945093 8372 start.go:159] libmachine.API.Create for "addons-703944" (driver="docker")
I0930 10:20:54.945129 8372 client.go:168] LocalClient.Create starting
I0930 10:20:54.945254 8372 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem
I0930 10:20:55.810310 8372 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem
I0930 10:20:55.977860 8372 cli_runner.go:164] Run: docker network inspect addons-703944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0930 10:20:55.993083 8372 cli_runner.go:211] docker network inspect addons-703944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0930 10:20:55.993171 8372 network_create.go:284] running [docker network inspect addons-703944] to gather additional debugging logs...
I0930 10:20:55.993192 8372 cli_runner.go:164] Run: docker network inspect addons-703944
W0930 10:20:56.007308 8372 cli_runner.go:211] docker network inspect addons-703944 returned with exit code 1
I0930 10:20:56.007343 8372 network_create.go:287] error running [docker network inspect addons-703944]: docker network inspect addons-703944: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-703944 not found
I0930 10:20:56.007356 8372 network_create.go:289] output of [docker network inspect addons-703944]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-703944 not found
** /stderr **
I0930 10:20:56.007475 8372 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0930 10:20:56.024062 8372 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d0fe0}
I0930 10:20:56.024114 8372 network_create.go:124] attempt to create docker network addons-703944 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0930 10:20:56.024168 8372 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-703944 addons-703944
I0930 10:20:56.089395 8372 network_create.go:108] docker network addons-703944 192.168.49.0/24 created
I0930 10:20:56.089428 8372 kic.go:121] calculated static IP "192.168.49.2" for the "addons-703944" container
I0930 10:20:56.089497 8372 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0930 10:20:56.103963 8372 cli_runner.go:164] Run: docker volume create addons-703944 --label name.minikube.sigs.k8s.io=addons-703944 --label created_by.minikube.sigs.k8s.io=true
I0930 10:20:56.121739 8372 oci.go:103] Successfully created a docker volume addons-703944
I0930 10:20:56.121829 8372 cli_runner.go:164] Run: docker run --rm --name addons-703944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703944 --entrypoint /usr/bin/test -v addons-703944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
I0930 10:20:58.244441 8372 cli_runner.go:217] Completed: docker run --rm --name addons-703944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703944 --entrypoint /usr/bin/test -v addons-703944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.122562309s)
I0930 10:20:58.244468 8372 oci.go:107] Successfully prepared a docker volume addons-703944
I0930 10:20:58.244490 8372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 10:20:58.244511 8372 kic.go:194] Starting extracting preloaded images to volume ...
I0930 10:20:58.244586 8372 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-703944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
I0930 10:21:01.890711 8372 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-703944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.646088019s)
I0930 10:21:01.890739 8372 kic.go:203] duration metric: took 3.646226191s to extract preloaded images to volume ...
W0930 10:21:01.890884 8372 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0930 10:21:01.891005 8372 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0930 10:21:01.936298 8372 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-703944 --name addons-703944 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703944 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-703944 --network addons-703944 --ip 192.168.49.2 --volume addons-703944:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
I0930 10:21:02.265370 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Running}}
I0930 10:21:02.292978 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:02.315213 8372 cli_runner.go:164] Run: docker exec addons-703944 stat /var/lib/dpkg/alternatives/iptables
I0930 10:21:02.377492 8372 oci.go:144] the created container "addons-703944" has a running status.
I0930 10:21:02.377520 8372 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa...
I0930 10:21:03.309591 8372 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0930 10:21:03.342119 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:03.358416 8372 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0930 10:21:03.358434 8372 kic_runner.go:114] Args: [docker exec --privileged addons-703944 chown docker:docker /home/docker/.ssh/authorized_keys]
I0930 10:21:03.409034 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:03.425135 8372 machine.go:93] provisionDockerMachine start ...
I0930 10:21:03.425219 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:03.441464 8372 main.go:141] libmachine: Using SSH client type: native
I0930 10:21:03.441744 8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0930 10:21:03.441754 8372 main.go:141] libmachine: About to run SSH command:
hostname
I0930 10:21:03.566472 8372 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703944
I0930 10:21:03.566496 8372 ubuntu.go:169] provisioning hostname "addons-703944"
I0930 10:21:03.566558 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:03.583023 8372 main.go:141] libmachine: Using SSH client type: native
I0930 10:21:03.583250 8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0930 10:21:03.583269 8372 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-703944 && echo "addons-703944" | sudo tee /etc/hostname
I0930 10:21:03.722513 8372 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703944
I0930 10:21:03.722665 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:03.740624 8372 main.go:141] libmachine: Using SSH client type: native
I0930 10:21:03.740863 8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0930 10:21:03.740887 8372 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-703944' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-703944/g' /etc/hosts;
else
echo '127.0.1.1 addons-703944' | sudo tee -a /etc/hosts;
fi
fi
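The /etc/hosts patch the provisioner runs above can be exercised against a scratch copy of the file; the `/tmp` path and the seed content are illustrative assumptions, not taken from this run:

```shell
# Sketch of minikube's /etc/hosts hostname patch, applied to a scratch file.
# /tmp/gr_demo_hosts and its seed contents are illustrative, not from the run.
HOSTS=/tmp/gr_demo_hosts
NAME=addons-703944
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
  if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 alias line is rewritten with the new hostname
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

The real command targets `/etc/hosts` under `sudo`; the branch logic is the same.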
I0930 10:21:03.871108 8372 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0930 10:21:03.871132 8372 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-2285/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-2285/.minikube}
I0930 10:21:03.871160 8372 ubuntu.go:177] setting up certificates
I0930 10:21:03.871172 8372 provision.go:84] configureAuth start
I0930 10:21:03.871235 8372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703944
I0930 10:21:03.887464 8372 provision.go:143] copyHostCerts
I0930 10:21:03.887567 8372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-2285/.minikube/ca.pem (1082 bytes)
I0930 10:21:03.887703 8372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-2285/.minikube/cert.pem (1123 bytes)
I0930 10:21:03.887764 8372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-2285/.minikube/key.pem (1679 bytes)
I0930 10:21:03.887816 8372 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-2285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca-key.pem org=jenkins.addons-703944 san=[127.0.0.1 192.168.49.2 addons-703944 localhost minikube]
I0930 10:21:04.203465 8372 provision.go:177] copyRemoteCerts
I0930 10:21:04.203529 8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0930 10:21:04.203602 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:04.219315 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:04.311934 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0930 10:21:04.334878 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0930 10:21:04.357437 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0930 10:21:04.380013 8372 provision.go:87] duration metric: took 508.828698ms to configureAuth
I0930 10:21:04.380041 8372 ubuntu.go:193] setting minikube options for container-runtime
I0930 10:21:04.380227 8372 config.go:182] Loaded profile config "addons-703944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:21:04.380283 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:04.396140 8372 main.go:141] libmachine: Using SSH client type: native
I0930 10:21:04.396380 8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0930 10:21:04.396398 8372 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0930 10:21:04.523628 8372 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0930 10:21:04.523691 8372 ubuntu.go:71] root file system type: overlay
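The filesystem probe behind that "root file system type" line is a one-liner; the same command run on any host prints its root filesystem type (overlay here because `/` inside the kic container is an overlayfs):

```shell
# Print the filesystem type of / -- the probe minikube uses above.
# Output depends on the host (e.g. overlay, ext4, btrfs).
df --output=fstype / | tail -n 1
```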
I0930 10:21:04.523826 8372 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0930 10:21:04.523894 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:04.540175 8372 main.go:141] libmachine: Using SSH client type: native
I0930 10:21:04.540416 8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0930 10:21:04.540498 8372 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0930 10:21:04.678175 8372 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0930 10:21:04.678267 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:04.695200 8372 main.go:141] libmachine: Using SSH client type: native
I0930 10:21:04.695446 8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0930 10:21:04.695472 8372 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
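The compare-then-install idiom used above (if `diff` finds no change, leave the unit alone; otherwise move the `.new` file into place and restart) can be sketched with plain files; the paths and contents are illustrative assumptions:

```shell
# Sketch of "install only if changed": diff exits 0 when the files are
# identical (skip), nonzero when they differ (install the .new file).
# Paths and contents are illustrative, not from the run.
OLD=/tmp/gr_demo_unit
NEW=/tmp/gr_demo_unit.new
echo "ExecStart=/usr/bin/dockerd -H fd://" > "$OLD"
echo "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376" > "$NEW"

diff -u "$OLD" "$NEW" >/dev/null || mv "$NEW" "$OLD"
```

In the real run a `systemctl daemon-reload`, `enable`, and `restart docker` follow the `mv`; those are omitted here.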
I0930 10:21:05.431890 8372 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-20 11:39:18.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-30 10:21:04.672933741 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0930 10:21:05.431926 8372 machine.go:96] duration metric: took 2.006772955s to provisionDockerMachine
I0930 10:21:05.431955 8372 client.go:171] duration metric: took 10.486795644s to LocalClient.Create
I0930 10:21:05.431977 8372 start.go:167] duration metric: took 10.48688462s to libmachine.API.Create "addons-703944"
I0930 10:21:05.431989 8372 start.go:293] postStartSetup for "addons-703944" (driver="docker")
I0930 10:21:05.431999 8372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0930 10:21:05.432073 8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0930 10:21:05.432117 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:05.448707 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:05.540917 8372 ssh_runner.go:195] Run: cat /etc/os-release
I0930 10:21:05.544101 8372 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0930 10:21:05.544136 8372 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0930 10:21:05.544147 8372 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0930 10:21:05.544154 8372 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0930 10:21:05.544165 8372 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2285/.minikube/addons for local assets ...
I0930 10:21:05.544235 8372 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2285/.minikube/files for local assets ...
I0930 10:21:05.544257 8372 start.go:296] duration metric: took 112.262994ms for postStartSetup
I0930 10:21:05.544570 8372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703944
I0930 10:21:05.562225 8372 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/config.json ...
I0930 10:21:05.562514 8372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0930 10:21:05.562556 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:05.579501 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:05.668448 8372 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0930 10:21:05.673159 8372 start.go:128] duration metric: took 10.730792488s to createHost
I0930 10:21:05.673186 8372 start.go:83] releasing machines lock for "addons-703944", held for 10.730936813s
I0930 10:21:05.673275 8372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703944
I0930 10:21:05.690651 8372 ssh_runner.go:195] Run: cat /version.json
I0930 10:21:05.690673 8372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0930 10:21:05.690702 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:05.690743 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:05.714372 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:05.715110 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:05.928828 8372 ssh_runner.go:195] Run: systemctl --version
I0930 10:21:05.933048 8372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0930 10:21:05.937102 8372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0930 10:21:05.961718 8372 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0930 10:21:05.961796 8372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0930 10:21:05.989449 8372 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0930 10:21:05.989475 8372 start.go:495] detecting cgroup driver to use...
I0930 10:21:05.989531 8372 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0930 10:21:05.989646 8372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0930 10:21:06.005791 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0930 10:21:06.015917 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0930 10:21:06.026026 8372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0930 10:21:06.026099 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0930 10:21:06.036080 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0930 10:21:06.045847 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0930 10:21:06.055760 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0930 10:21:06.065113 8372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0930 10:21:06.074082 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0930 10:21:06.083606 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0930 10:21:06.092916 8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
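The `sed` rewrites in the containerd steps above can be reproduced against a scratch `config.toml`; the sample content and `/tmp` path are assumptions for illustration:

```shell
# Sketch: two of the containerd config rewrites from the log (sandbox_image
# and SystemdCgroup), applied to a scratch file. Sample content and the
# /tmp path are illustrative, not from the run.
CFG=/tmp/gr_demo_config.toml
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
EOF

# Pin the pause image and force the cgroupfs driver, preserving indentation
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
cat "$CFG"
```

The captured-indentation group (`\1`) is what lets the same expression work at any nesting depth in the real `/etc/containerd/config.toml`.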
I0930 10:21:06.102335 8372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0930 10:21:06.110555 8372 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0930 10:21:06.110617 8372 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0930 10:21:06.123769 8372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0930 10:21:06.133124 8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0930 10:21:06.213493 8372 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0930 10:21:06.310878 8372 start.go:495] detecting cgroup driver to use...
I0930 10:21:06.310970 8372 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0930 10:21:06.311038 8372 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0930 10:21:06.323620 8372 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0930 10:21:06.323743 8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0930 10:21:06.336518 8372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0930 10:21:06.352700 8372 ssh_runner.go:195] Run: which cri-dockerd
I0930 10:21:06.356641 8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0930 10:21:06.370675 8372 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0930 10:21:06.393790 8372 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0930 10:21:06.495078 8372 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0930 10:21:06.592988 8372 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0930 10:21:06.593192 8372 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0930 10:21:06.612228 8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0930 10:21:06.702672 8372 ssh_runner.go:195] Run: sudo systemctl restart docker
I0930 10:21:06.966149 8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0930 10:21:06.978367 8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0930 10:21:06.990181 8372 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0930 10:21:07.087304 8372 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0930 10:21:07.174889 8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0930 10:21:07.264638 8372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0930 10:21:07.278180 8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0930 10:21:07.289029 8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0930 10:21:07.374836 8372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0930 10:21:07.440462 8372 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0930 10:21:07.440610 8372 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0930 10:21:07.444403 8372 start.go:563] Will wait 60s for crictl version
I0930 10:21:07.444498 8372 ssh_runner.go:195] Run: which crictl
I0930 10:21:07.447529 8372 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0930 10:21:07.487388 8372 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0930 10:21:07.487500 8372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0930 10:21:07.510098 8372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0930 10:21:07.535171 8372 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0930 10:21:07.535262 8372 cli_runner.go:164] Run: docker network inspect addons-703944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0930 10:21:07.550126 8372 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0930 10:21:07.554591 8372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0930 10:21:07.564545 8372 kubeadm.go:883] updating cluster {Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0930 10:21:07.564653 8372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 10:21:07.564712 8372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0930 10:21:07.581874 8372 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0930 10:21:07.581896 8372 docker.go:615] Images already preloaded, skipping extraction
I0930 10:21:07.581957 8372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0930 10:21:07.597772 8372 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0930 10:21:07.597796 8372 cache_images.go:84] Images are preloaded, skipping loading
I0930 10:21:07.597806 8372 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0930 10:21:07.597894 8372 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-703944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0930 10:21:07.597965 8372 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0930 10:21:07.637730 8372 cni.go:84] Creating CNI manager for ""
I0930 10:21:07.637754 8372 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0930 10:21:07.637767 8372 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0930 10:21:07.637785 8372 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-703944 NodeName:addons-703944 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0930 10:21:07.637918 8372 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "addons-703944"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0930 10:21:07.637981 8372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0930 10:21:07.646415 8372 binaries.go:44] Found k8s binaries, skipping transfer
I0930 10:21:07.646481 8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0930 10:21:07.654646 8372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0930 10:21:07.671757 8372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0930 10:21:07.688999 8372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0930 10:21:07.706483 8372 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0930 10:21:07.709896 8372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0930 10:21:07.720493 8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0930 10:21:07.806936 8372 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0930 10:21:07.821569 8372 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944 for IP: 192.168.49.2
I0930 10:21:07.821593 8372 certs.go:194] generating shared ca certs ...
I0930 10:21:07.821608 8372 certs.go:226] acquiring lock for ca certs: {Name:mkc88472a42ce604780a44bea1d376b9310242a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:07.821794 8372 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key
I0930 10:21:08.354917 8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt ...
I0930 10:21:08.354948 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt: {Name:mk0122201555ccaf3ca9f01ed4cca7b90ae5dd97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:08.355149 8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key ...
I0930 10:21:08.355163 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key: {Name:mk2840bc2a90336af3902da6afe8ca59e0524fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:08.355246 8372 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key
I0930 10:21:08.976758 8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.crt ...
I0930 10:21:08.976790 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.crt: {Name:mkf7e924be88e949e3d1ab2bf1b7abc89be2b043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:08.976968 8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key ...
I0930 10:21:08.976982 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key: {Name:mk62a332fb42a376154b13d7505da29694ef318f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:08.977065 8372 certs.go:256] generating profile certs ...
I0930 10:21:08.977132 8372 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.key
I0930 10:21:08.977150 8372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt with IP's: []
I0930 10:21:09.383494 8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt ...
I0930 10:21:09.383524 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: {Name:mk9569155d0419e7620e5d2199494fc166cba673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:09.383713 8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.key ...
I0930 10:21:09.383725 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.key: {Name:mk3fffa1cbe5623ed803ed09d54abece76021bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:09.383805 8372 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3
I0930 10:21:09.383825 8372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0930 10:21:09.873500 8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3 ...
I0930 10:21:09.873532 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3: {Name:mk80d304c28e346d7d2e04279240ca0c4b77a39d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:09.873703 8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3 ...
I0930 10:21:09.873720 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3: {Name:mk3f80cc3300ee08b20cc8b5409dde06169ea865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:09.873802 8372 certs.go:381] copying /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3 -> /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt
I0930 10:21:09.873882 8372 certs.go:385] copying /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3 -> /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key
I0930 10:21:09.873939 8372 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key
I0930 10:21:09.873959 8372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt with IP's: []
I0930 10:21:10.129164 8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt ...
I0930 10:21:10.129194 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt: {Name:mk6e3c176c4ef0ca48e94a7ac5538637829aba39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:10.129369 8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key ...
I0930 10:21:10.129381 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key: {Name:mk11d8a42c8c8407475e87c9983b10099aac5b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:10.129572 8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca-key.pem (1679 bytes)
I0930 10:21:10.129615 8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem (1082 bytes)
I0930 10:21:10.129646 8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem (1123 bytes)
I0930 10:21:10.129675 8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/key.pem (1679 bytes)
I0930 10:21:10.130263 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0930 10:21:10.155189 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0930 10:21:10.179371 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0930 10:21:10.203231 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0930 10:21:10.225986 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0930 10:21:10.249634 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0930 10:21:10.271747 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0930 10:21:10.294496 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0930 10:21:10.316969 8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0930 10:21:10.340181 8372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0930 10:21:10.356666 8372 ssh_runner.go:195] Run: openssl version
I0930 10:21:10.361834 8372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0930 10:21:10.371169 8372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:10.374489 8372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:10.374566 8372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:10.380955 8372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0930 10:21:10.389832 8372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0930 10:21:10.392773 8372 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0930 10:21:10.392818 8372 kubeadm.go:392] StartCluster: {Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0930 10:21:10.392941 8372 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0930 10:21:10.409630 8372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0930 10:21:10.417798 8372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0930 10:21:10.425727 8372 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0930 10:21:10.425788 8372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0930 10:21:10.434141 8372 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0930 10:21:10.434161 8372 kubeadm.go:157] found existing configuration files:
I0930 10:21:10.434211 8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0930 10:21:10.442852 8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0930 10:21:10.442913 8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0930 10:21:10.451147 8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0930 10:21:10.459727 8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0930 10:21:10.459811 8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0930 10:21:10.468160 8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0930 10:21:10.476841 8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0930 10:21:10.476925 8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0930 10:21:10.484925 8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0930 10:21:10.493068 8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0930 10:21:10.493180 8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0930 10:21:10.500516 8372 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0930 10:21:10.539350 8372 kubeadm.go:310] W0930 10:21:10.538657 1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0930 10:21:10.540496 8372 kubeadm.go:310] W0930 10:21:10.539942 1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0930 10:21:10.563129 8372 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0930 10:21:10.622791 8372 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0930 10:21:26.483725 8372 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0930 10:21:26.483781 8372 kubeadm.go:310] [preflight] Running pre-flight checks
I0930 10:21:26.483871 8372 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0930 10:21:26.483928 8372 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0930 10:21:26.483966 8372 kubeadm.go:310] OS: Linux
I0930 10:21:26.484015 8372 kubeadm.go:310] CGROUPS_CPU: enabled
I0930 10:21:26.484065 8372 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0930 10:21:26.484115 8372 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0930 10:21:26.484168 8372 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0930 10:21:26.484217 8372 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0930 10:21:26.484270 8372 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0930 10:21:26.484325 8372 kubeadm.go:310] CGROUPS_PIDS: enabled
I0930 10:21:26.484377 8372 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0930 10:21:26.484426 8372 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0930 10:21:26.484498 8372 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0930 10:21:26.484597 8372 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0930 10:21:26.484687 8372 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0930 10:21:26.484751 8372 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0930 10:21:26.486786 8372 out.go:235] - Generating certificates and keys ...
I0930 10:21:26.486876 8372 kubeadm.go:310] [certs] Using existing ca certificate authority
I0930 10:21:26.486969 8372 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0930 10:21:26.487045 8372 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0930 10:21:26.487121 8372 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0930 10:21:26.487203 8372 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0930 10:21:26.487257 8372 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0930 10:21:26.487312 8372 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0930 10:21:26.487438 8372 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-703944 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0930 10:21:26.487507 8372 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0930 10:21:26.487671 8372 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-703944 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0930 10:21:26.487757 8372 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0930 10:21:26.487834 8372 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0930 10:21:26.487884 8372 kubeadm.go:310] [certs] Generating "sa" key and public key
I0930 10:21:26.487964 8372 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0930 10:21:26.488045 8372 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0930 10:21:26.488134 8372 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0930 10:21:26.488212 8372 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0930 10:21:26.488304 8372 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0930 10:21:26.488383 8372 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0930 10:21:26.488489 8372 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0930 10:21:26.488587 8372 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0930 10:21:26.490439 8372 out.go:235] - Booting up control plane ...
I0930 10:21:26.490588 8372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0930 10:21:26.490682 8372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0930 10:21:26.490755 8372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0930 10:21:26.490858 8372 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0930 10:21:26.490940 8372 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0930 10:21:26.490978 8372 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0930 10:21:26.491109 8372 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0930 10:21:26.491210 8372 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0930 10:21:26.491266 8372 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001640709s
I0930 10:21:26.491337 8372 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0930 10:21:26.491392 8372 kubeadm.go:310] [api-check] The API server is healthy after 7.001261086s
I0930 10:21:26.491495 8372 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0930 10:21:26.491643 8372 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0930 10:21:26.491701 8372 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0930 10:21:26.491877 8372 kubeadm.go:310] [mark-control-plane] Marking the node addons-703944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0930 10:21:26.491932 8372 kubeadm.go:310] [bootstrap-token] Using token: lsod6c.4t1f64okr2pfpgmx
I0930 10:21:26.494068 8372 out.go:235] - Configuring RBAC rules ...
I0930 10:21:26.494248 8372 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0930 10:21:26.494379 8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0930 10:21:26.494573 8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0930 10:21:26.494724 8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0930 10:21:26.494849 8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0930 10:21:26.494944 8372 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0930 10:21:26.495073 8372 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0930 10:21:26.495122 8372 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0930 10:21:26.495173 8372 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0930 10:21:26.495180 8372 kubeadm.go:310]
I0930 10:21:26.495244 8372 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0930 10:21:26.495253 8372 kubeadm.go:310]
I0930 10:21:26.495333 8372 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0930 10:21:26.495340 8372 kubeadm.go:310]
I0930 10:21:26.495367 8372 kubeadm.go:310] mkdir -p $HOME/.kube
I0930 10:21:26.495436 8372 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0930 10:21:26.495492 8372 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0930 10:21:26.495499 8372 kubeadm.go:310]
I0930 10:21:26.495579 8372 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0930 10:21:26.495589 8372 kubeadm.go:310]
I0930 10:21:26.495642 8372 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0930 10:21:26.495650 8372 kubeadm.go:310]
I0930 10:21:26.495705 8372 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0930 10:21:26.495788 8372 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0930 10:21:26.495864 8372 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0930 10:21:26.495872 8372 kubeadm.go:310]
I0930 10:21:26.495961 8372 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0930 10:21:26.496046 8372 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0930 10:21:26.496053 8372 kubeadm.go:310]
I0930 10:21:26.496142 8372 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lsod6c.4t1f64okr2pfpgmx \
I0930 10:21:26.496254 8372 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:745106e23a5f99f7b5cf3f70fc5b7fa08e737936aedd27a5a99b20714a4f1180 \
I0930 10:21:26.496279 8372 kubeadm.go:310] --control-plane
I0930 10:21:26.496286 8372 kubeadm.go:310]
I0930 10:21:26.496376 8372 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0930 10:21:26.496384 8372 kubeadm.go:310]
I0930 10:21:26.496471 8372 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lsod6c.4t1f64okr2pfpgmx \
I0930 10:21:26.496594 8372 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:745106e23a5f99f7b5cf3f70fc5b7fa08e737936aedd27a5a99b20714a4f1180
I0930 10:21:26.496606 8372 cni.go:84] Creating CNI manager for ""
I0930 10:21:26.496619 8372 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0930 10:21:26.498891 8372 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0930 10:21:26.500878 8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0930 10:21:26.509550 8372 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0930 10:21:26.528465 8372 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0930 10:21:26.528552 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:26.528592 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-703944 minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-703944 minikube.k8s.io/primary=true
I0930 10:21:26.544102 8372 ops.go:34] apiserver oom_adj: -16
I0930 10:21:26.673171 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:27.173851 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:27.674211 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:28.174030 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:28.674124 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:29.173214 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:29.674085 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:30.173949 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:30.673776 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:31.173362 8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:31.298798 8372 kubeadm.go:1113] duration metric: took 4.770307554s to wait for elevateKubeSystemPrivileges
I0930 10:21:31.298839 8372 kubeadm.go:394] duration metric: took 20.906025265s to StartCluster
I0930 10:21:31.298856 8372 settings.go:142] acquiring lock: {Name:mkcf2de35d43f3b73031cab05addbe76685d61d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:31.298979 8372 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19734-2285/kubeconfig
I0930 10:21:31.299357 8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/kubeconfig: {Name:mk4ffb7b34cf58f060bd905874f12e785542fb79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:31.299599 8372 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0930 10:21:31.299750 8372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0930 10:21:31.300001 8372 config.go:182] Loaded profile config "addons-703944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:21:31.300036 8372 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0930 10:21:31.300111 8372 addons.go:69] Setting yakd=true in profile "addons-703944"
I0930 10:21:31.300128 8372 addons.go:234] Setting addon yakd=true in "addons-703944"
I0930 10:21:31.300151 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.300640 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.301045 8372 addons.go:69] Setting inspektor-gadget=true in profile "addons-703944"
I0930 10:21:31.301066 8372 addons.go:234] Setting addon inspektor-gadget=true in "addons-703944"
I0930 10:21:31.301091 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.301539 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.301695 8372 addons.go:69] Setting metrics-server=true in profile "addons-703944"
I0930 10:21:31.301721 8372 addons.go:234] Setting addon metrics-server=true in "addons-703944"
I0930 10:21:31.301814 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.302252 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.304914 8372 addons.go:69] Setting cloud-spanner=true in profile "addons-703944"
I0930 10:21:31.304943 8372 addons.go:234] Setting addon cloud-spanner=true in "addons-703944"
I0930 10:21:31.304970 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.305036 8372 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-703944"
I0930 10:21:31.305054 8372 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-703944"
I0930 10:21:31.305076 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.305413 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.305488 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.306014 8372 addons.go:69] Setting registry=true in profile "addons-703944"
I0930 10:21:31.306037 8372 addons.go:234] Setting addon registry=true in "addons-703944"
I0930 10:21:31.306064 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.306482 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.312882 8372 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-703944"
I0930 10:21:31.312950 8372 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-703944"
I0930 10:21:31.312983 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.313450 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.318745 8372 addons.go:69] Setting storage-provisioner=true in profile "addons-703944"
I0930 10:21:31.318833 8372 addons.go:234] Setting addon storage-provisioner=true in "addons-703944"
I0930 10:21:31.318895 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.320024 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.330320 8372 addons.go:69] Setting default-storageclass=true in profile "addons-703944"
I0930 10:21:31.330359 8372 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-703944"
I0930 10:21:31.330785 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.343868 8372 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-703944"
I0930 10:21:31.343964 8372 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-703944"
I0930 10:21:31.345969 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.350237 8372 addons.go:69] Setting gcp-auth=true in profile "addons-703944"
I0930 10:21:31.382515 8372 mustload.go:65] Loading cluster: addons-703944
I0930 10:21:31.382766 8372 config.go:182] Loaded profile config "addons-703944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:21:31.383047 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.366578 8372 addons.go:69] Setting volcano=true in profile "addons-703944"
I0930 10:21:31.385775 8372 addons.go:234] Setting addon volcano=true in "addons-703944"
I0930 10:21:31.385817 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.366592 8372 addons.go:69] Setting volumesnapshots=true in profile "addons-703944"
I0930 10:21:31.366735 8372 out.go:177] * Verifying Kubernetes components...
I0930 10:21:31.367840 8372 addons.go:69] Setting ingress=true in profile "addons-703944"
I0930 10:21:31.398536 8372 addons.go:234] Setting addon ingress=true in "addons-703944"
I0930 10:21:31.398588 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.399115 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.403134 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.367852 8372 addons.go:69] Setting ingress-dns=true in profile "addons-703944"
I0930 10:21:31.405421 8372 addons.go:234] Setting addon ingress-dns=true in "addons-703944"
I0930 10:21:31.405517 8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0930 10:21:31.405985 8372 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0930 10:21:31.407975 8372 addons.go:234] Setting addon volumesnapshots=true in "addons-703944"
I0930 10:21:31.408172 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.408732 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.415145 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.421720 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.408063 8372 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0930 10:21:31.425317 8372 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0930 10:21:31.425338 8372 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0930 10:21:31.425445 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.446626 8372 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0930 10:21:31.446657 8372 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0930 10:21:31.446772 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.474558 8372 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
I0930 10:21:31.474932 8372 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0930 10:21:31.475167 8372 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0930 10:21:31.484832 8372 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0930 10:21:31.484895 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0930 10:21:31.484987 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.502194 8372 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0930 10:21:31.502214 8372 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0930 10:21:31.502274 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.503789 8372 out.go:177] - Using image docker.io/registry:2.8.3
I0930 10:21:31.503965 8372 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I0930 10:21:31.505428 8372 addons.go:234] Setting addon default-storageclass=true in "addons-703944"
I0930 10:21:31.505458 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.505868 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.527674 8372 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0930 10:21:31.527710 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0930 10:21:31.527778 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.555321 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.565756 8372 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0930 10:21:31.565843 8372 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I0930 10:21:31.565908 8372 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0930 10:21:31.566321 8372 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0930 10:21:31.566326 8372 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0930 10:21:31.583829 8372 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0930 10:21:31.584222 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0930 10:21:31.584287 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.588565 8372 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:31.588591 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0930 10:21:31.588653 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.620895 8372 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I0930 10:21:31.627002 8372 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0930 10:21:31.627033 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I0930 10:21:31.627116 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.644368 8372 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0930 10:21:31.655758 8372 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0930 10:21:31.659779 8372 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0930 10:21:31.660469 8372 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0930 10:21:31.663614 8372 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0930 10:21:31.665797 8372 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0930 10:21:31.669554 8372 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0930 10:21:31.670782 8372 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0930 10:21:31.671236 8372 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0930 10:21:31.671698 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0930 10:21:31.671819 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.693114 8372 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0930 10:21:31.696548 8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0930 10:21:31.697114 8372 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0930 10:21:31.697204 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.671269 8372 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0930 10:21:31.704274 8372 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0930 10:21:31.704295 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0930 10:21:31.704429 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.717088 8372 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0930 10:21:31.719027 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0930 10:21:31.719049 8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0930 10:21:31.719132 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.725191 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.727275 8372 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-703944"
I0930 10:21:31.727316 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:31.728473 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:31.751664 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.764668 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.766406 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.766970 8372 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:31.767041 8372 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0930 10:21:31.767111 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.792312 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.844010 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.845937 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.850552 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.896841 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.901318 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.909904 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.913702 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.917842 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:31.919677 8372 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0930 10:21:31.921500 8372 out.go:177] - Using image docker.io/busybox:stable
I0930 10:21:31.923622 8372 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0930 10:21:31.923642 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0930 10:21:31.923708 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:31.959169 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:32.398712 8372 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.098929353s)
I0930 10:21:32.398847 8372 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0930 10:21:32.399007 8372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0930 10:21:32.657942 8372 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0930 10:21:32.657970 8372 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0930 10:21:32.659636 8372 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0930 10:21:32.659658 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0930 10:21:32.758143 8372 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0930 10:21:32.758167 8372 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0930 10:21:32.871192 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0930 10:21:32.883632 8372 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0930 10:21:32.883656 8372 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0930 10:21:32.894807 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0930 10:21:32.894845 8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0930 10:21:32.901370 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0930 10:21:32.920056 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0930 10:21:32.969056 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0930 10:21:32.977060 8372 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0930 10:21:32.977084 8372 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0930 10:21:32.981476 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0930 10:21:32.987439 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:33.057364 8372 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0930 10:21:33.057389 8372 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0930 10:21:33.065849 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:33.069139 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0930 10:21:33.091688 8372 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0930 10:21:33.091713 8372 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0930 10:21:33.095095 8372 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0930 10:21:33.095126 8372 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0930 10:21:33.152667 8372 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0930 10:21:33.152692 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0930 10:21:33.159271 8372 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0930 10:21:33.159294 8372 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0930 10:21:33.188320 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0930 10:21:33.188345 8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0930 10:21:33.284080 8372 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0930 10:21:33.284105 8372 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0930 10:21:33.291865 8372 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0930 10:21:33.291893 8372 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0930 10:21:33.382333 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0930 10:21:33.384878 8372 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0930 10:21:33.384933 8372 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0930 10:21:33.388552 8372 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0930 10:21:33.388600 8372 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0930 10:21:33.524691 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0930 10:21:33.524753 8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0930 10:21:33.618633 8372 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0930 10:21:33.618709 8372 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0930 10:21:33.663230 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0930 10:21:33.665128 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0930 10:21:33.665197 8372 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0930 10:21:33.697269 8372 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0930 10:21:33.697345 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0930 10:21:33.761523 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0930 10:21:33.761611 8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0930 10:21:33.913196 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0930 10:21:33.982595 8372 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0930 10:21:33.982623 8372 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0930 10:21:34.023134 8372 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:34.023158 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0930 10:21:34.075580 8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0930 10:21:34.075605 8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0930 10:21:34.218980 8372 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
I0930 10:21:34.219004 8372 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
I0930 10:21:34.408594 8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0930 10:21:34.408617 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0930 10:21:34.439059 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:34.654541 8372 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.255481607s)
I0930 10:21:34.654571 8372 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
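[annotation] The sed pipeline completed above edits the coredns ConfigMap in place so that pods can resolve `host.minikube.internal`. Reconstructed from the sed expressions in the command (indentation approximate, not the literal ConfigMap contents), the resulting Corefile gains roughly:

```
.:53 {
    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}
```

The `fallthrough` directive lets queries that don't match the static `hosts` entry continue to the `forward` plugin.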
I0930 10:21:34.654631 8372 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.25568396s)
I0930 10:21:34.655505 8372 node_ready.go:35] waiting up to 6m0s for node "addons-703944" to be "Ready" ...
I0930 10:21:34.659936 8372 node_ready.go:49] node "addons-703944" has status "Ready":"True"
I0930 10:21:34.659961 8372 node_ready.go:38] duration metric: took 4.425472ms for node "addons-703944" to be "Ready" ...
I0930 10:21:34.659973 8372 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0930 10:21:34.673234 8372 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace to be "Ready" ...
I0930 10:21:34.820538 8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0930 10:21:34.820564 8372 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0930 10:21:34.866838 8372 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0930 10:21:34.866866 8372 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0930 10:21:35.159508 8372 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-703944" context rescaled to 1 replicas
I0930 10:21:35.171056 8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0930 10:21:35.171137 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0930 10:21:35.255473 8372 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0930 10:21:35.255563 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
I0930 10:21:35.426546 8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0930 10:21:35.426613 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0930 10:21:35.491685 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0930 10:21:35.605009 8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0930 10:21:35.605080 8372 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0930 10:21:36.489253 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0930 10:21:36.699285 8372 pod_ready.go:103] pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:38.576505 8372 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0930 10:21:38.576617 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:38.603925 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:39.179284 8372 pod_ready.go:103] pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:39.531293 8372 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0930 10:21:39.773675 8372 addons.go:234] Setting addon gcp-auth=true in "addons-703944"
I0930 10:21:39.773770 8372 host.go:66] Checking if "addons-703944" exists ...
I0930 10:21:39.774246 8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
I0930 10:21:39.801168 8372 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0930 10:21:39.801224 8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
I0930 10:21:39.826415 8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
I0930 10:21:41.176006 8372 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-h47vt" not found
I0930 10:21:41.176079 8372 pod_ready.go:82] duration metric: took 6.502809994s for pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace to be "Ready" ...
E0930 10:21:41.176104 8372 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-h47vt" not found
I0930 10:21:41.176202 8372 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-whncm" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.180696 8372 pod_ready.go:93] pod "coredns-7c65d6cfc9-whncm" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.180786 8372 pod_ready.go:82] duration metric: took 4.553988ms for pod "coredns-7c65d6cfc9-whncm" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.180818 8372 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.184950 8372 pod_ready.go:93] pod "etcd-addons-703944" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.185013 8372 pod_ready.go:82] duration metric: took 4.158753ms for pod "etcd-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.185037 8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.192708 8372 pod_ready.go:93] pod "kube-apiserver-addons-703944" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.192775 8372 pod_ready.go:82] duration metric: took 7.717973ms for pod "kube-apiserver-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.192806 8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.197260 8372 pod_ready.go:93] pod "kube-controller-manager-addons-703944" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.197284 8372 pod_ready.go:82] duration metric: took 4.446289ms for pod "kube-controller-manager-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.197294 8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl4mj" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.394941 8372 pod_ready.go:93] pod "kube-proxy-xl4mj" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.395009 8372 pod_ready.go:82] duration metric: took 197.707672ms for pod "kube-proxy-xl4mj" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.395037 8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.777931 8372 pod_ready.go:93] pod "kube-scheduler-addons-703944" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.777996 8372 pod_ready.go:82] duration metric: took 382.937765ms for pod "kube-scheduler-addons-703944" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.778031 8372 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace to be "Ready" ...
I0930 10:21:43.813374 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.942145722s)
I0930 10:21:43.813562 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.912172546s)
I0930 10:21:43.813577 8372 addons.go:475] Verifying addon ingress=true in "addons-703944"
I0930 10:21:43.813614 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.893519068s)
I0930 10:21:43.813662 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.844583889s)
I0930 10:21:43.813712 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.832216838s)
I0930 10:21:43.813902 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.826442044s)
I0930 10:21:43.814014 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.748141301s)
I0930 10:21:43.814050 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.744891426s)
I0930 10:21:43.814082 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.431691049s)
I0930 10:21:43.814091 8372 addons.go:475] Verifying addon registry=true in "addons-703944"
I0930 10:21:43.814414 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.151092694s)
I0930 10:21:43.814445 8372 addons.go:475] Verifying addon metrics-server=true in "addons-703944"
I0930 10:21:43.814486 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.901264855s)
I0930 10:21:43.814765 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.375677644s)
W0930 10:21:43.814805 8372 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0930 10:21:43.814827 8372 retry.go:31] will retry after 166.069544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
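[annotation] The failure above is a CRD ordering race: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` invocation that creates its CRD, before the API server has registered the new type (hence "no matches for kind ... ensure CRDs are installed first"). minikube handles this by retrying (and later re-applying with `--force`, see below). A sketch of the conventional way to avoid the race, assuming the same manifest filenames as in the log:

```shell
# Apply the CRD first, wait for the API server to register it,
# then apply resources of that kind.
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for=condition=Established \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
kubectl apply -f csi-hostpath-snapshotclass.yaml
```

This fragment requires a live cluster, so it is illustrative only.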
I0930 10:21:43.814907 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.323148215s)
I0930 10:21:43.816237 8372 out.go:177] * Verifying ingress addon...
I0930 10:21:43.817137 8372 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-703944 service yakd-dashboard -n yakd-dashboard
I0930 10:21:43.817141 8372 out.go:177] * Verifying registry addon...
I0930 10:21:43.820257 8372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0930 10:21:43.821244 8372 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0930 10:21:43.849106 8372 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0930 10:21:43.849183 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:43.850206 8372 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0930 10:21:43.854698 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W0930 10:21:43.867076 8372 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0930 10:21:43.886706 8372 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:43.981460 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:44.363424 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:44.364717 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:44.855224 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:44.855943 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:44.929927 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.440582076s)
I0930 10:21:44.929957 8372 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-703944"
I0930 10:21:44.929968 8372 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.128775032s)
I0930 10:21:44.932752 8372 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0930 10:21:44.932870 8372 out.go:177] * Verifying csi-hostpath-driver addon...
I0930 10:21:44.935869 8372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:21:44.937707 8372 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0930 10:21:44.940093 8372 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0930 10:21:44.940118 8372 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0930 10:21:44.951785 8372 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:21:44.951864 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:45.058118 8372 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0930 10:21:45.058191 8372 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0930 10:21:45.117716 8372 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0930 10:21:45.117792 8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0930 10:21:45.168871 8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0930 10:21:45.327408 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:45.327689 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:45.441536 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:45.827211 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:45.828039 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:45.941179 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:46.035244 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.053738796s)
I0930 10:21:46.284559 8372 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:46.330313 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:46.332565 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:46.449123 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:46.495807 8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.326848386s)
I0930 10:21:46.498689 8372 addons.go:475] Verifying addon gcp-auth=true in "addons-703944"
I0930 10:21:46.501624 8372 out.go:177] * Verifying gcp-auth addon...
I0930 10:21:46.505155 8372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0930 10:21:46.545326 8372 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0930 10:21:46.823669 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:46.826154 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:46.942058 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:47.327518 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:47.328967 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:47.442381 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:47.825525 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:47.826138 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:47.941270 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:48.324233 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:48.326193 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:48.440663 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:48.783921 8372 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:48.826265 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:48.827093 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:48.941352 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:49.284537 8372 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:49.284558 8372 pod_ready.go:82] duration metric: took 7.50650461s for pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace to be "Ready" ...
I0930 10:21:49.284568 8372 pod_ready.go:39] duration metric: took 14.624551967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0930 10:21:49.284586 8372 api_server.go:52] waiting for apiserver process to appear ...
I0930 10:21:49.284645 8372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:49.303939 8372 api_server.go:72] duration metric: took 18.004303838s to wait for apiserver process to appear ...
I0930 10:21:49.303964 8372 api_server.go:88] waiting for apiserver healthz status ...
I0930 10:21:49.303986 8372 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0930 10:21:49.311528 8372 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0930 10:21:49.312569 8372 api_server.go:141] control plane version: v1.31.1
I0930 10:21:49.312593 8372 api_server.go:131] duration metric: took 8.621856ms to wait for apiserver health ...
I0930 10:21:49.312606 8372 system_pods.go:43] waiting for kube-system pods to appear ...
I0930 10:21:49.322110 8372 system_pods.go:59] 17 kube-system pods found
I0930 10:21:49.322146 8372 system_pods.go:61] "coredns-7c65d6cfc9-whncm" [46a80f84-c5a3-4343-a13b-c43c9e972bea] Running
I0930 10:21:49.322157 8372 system_pods.go:61] "csi-hostpath-attacher-0" [fae93b3c-422c-4801-b3b3-e2abfa21edfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0930 10:21:49.322165 8372 system_pods.go:61] "csi-hostpath-resizer-0" [73ed8b8e-3373-4fd3-9185-afb6c7da7d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0930 10:21:49.322174 8372 system_pods.go:61] "csi-hostpathplugin-k6tp6" [cbada5b7-306c-4194-a282-af2298bf3ca0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0930 10:21:49.322185 8372 system_pods.go:61] "etcd-addons-703944" [509b84cb-de0e-4191-bfc1-11eca5bf513c] Running
I0930 10:21:49.322193 8372 system_pods.go:61] "kube-apiserver-addons-703944" [99576047-d72e-4965-b471-24c7cc8754ed] Running
I0930 10:21:49.322206 8372 system_pods.go:61] "kube-controller-manager-addons-703944" [78f7b3b2-6425-4493-a8a2-8638fa09817d] Running
I0930 10:21:49.322213 8372 system_pods.go:61] "kube-ingress-dns-minikube" [c9a50869-6b2b-4991-8768-56022a305760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0930 10:21:49.322222 8372 system_pods.go:61] "kube-proxy-xl4mj" [a24923c1-7646-42d3-a132-c59589ed9310] Running
I0930 10:21:49.322227 8372 system_pods.go:61] "kube-scheduler-addons-703944" [c83f2380-e6dc-48f9-8d9b-588f3bc7fa34] Running
I0930 10:21:49.322233 8372 system_pods.go:61] "metrics-server-84c5f94fbc-72src" [2328d76c-f121-44e6-894a-b82153cbb0b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0930 10:21:49.322240 8372 system_pods.go:61] "nvidia-device-plugin-daemonset-ftwnl" [8b10a7e7-ec39-4b16-8d9f-33979a0e6e8d] Running
I0930 10:21:49.322246 8372 system_pods.go:61] "registry-66c9cd494c-rdvzj" [1071ed50-a346-48af-bd60-fb6e526e1d58] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0930 10:21:49.322252 8372 system_pods.go:61] "registry-proxy-ggxvp" [a0c7860c-3f6b-40f2-9761-cd6466b5e812] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0930 10:21:49.322259 8372 system_pods.go:61] "snapshot-controller-56fcc65765-kth5m" [d7c3897c-dd10-4c1e-a9ce-f2691e7f1c92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:49.322268 8372 system_pods.go:61] "snapshot-controller-56fcc65765-pssjz" [a7de780e-cc75-4d90-9860-be9d0ba459d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:49.322274 8372 system_pods.go:61] "storage-provisioner" [d9f9be36-ec15-42fe-ae1c-03e9bd9fbd83] Running
I0930 10:21:49.322287 8372 system_pods.go:74] duration metric: took 9.674084ms to wait for pod list to return data ...
I0930 10:21:49.322293 8372 default_sa.go:34] waiting for default service account to be created ...
I0930 10:21:49.327243 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:49.328174 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:49.328883 8372 default_sa.go:45] found service account: "default"
I0930 10:21:49.328939 8372 default_sa.go:55] duration metric: took 6.636242ms for default service account to be created ...
I0930 10:21:49.328963 8372 system_pods.go:116] waiting for k8s-apps to be running ...
I0930 10:21:49.338613 8372 system_pods.go:86] 17 kube-system pods found
I0930 10:21:49.338652 8372 system_pods.go:89] "coredns-7c65d6cfc9-whncm" [46a80f84-c5a3-4343-a13b-c43c9e972bea] Running
I0930 10:21:49.338723 8372 system_pods.go:89] "csi-hostpath-attacher-0" [fae93b3c-422c-4801-b3b3-e2abfa21edfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0930 10:21:49.338737 8372 system_pods.go:89] "csi-hostpath-resizer-0" [73ed8b8e-3373-4fd3-9185-afb6c7da7d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0930 10:21:49.338746 8372 system_pods.go:89] "csi-hostpathplugin-k6tp6" [cbada5b7-306c-4194-a282-af2298bf3ca0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0930 10:21:49.338751 8372 system_pods.go:89] "etcd-addons-703944" [509b84cb-de0e-4191-bfc1-11eca5bf513c] Running
I0930 10:21:49.338757 8372 system_pods.go:89] "kube-apiserver-addons-703944" [99576047-d72e-4965-b471-24c7cc8754ed] Running
I0930 10:21:49.338762 8372 system_pods.go:89] "kube-controller-manager-addons-703944" [78f7b3b2-6425-4493-a8a2-8638fa09817d] Running
I0930 10:21:49.338770 8372 system_pods.go:89] "kube-ingress-dns-minikube" [c9a50869-6b2b-4991-8768-56022a305760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0930 10:21:49.338774 8372 system_pods.go:89] "kube-proxy-xl4mj" [a24923c1-7646-42d3-a132-c59589ed9310] Running
I0930 10:21:49.338780 8372 system_pods.go:89] "kube-scheduler-addons-703944" [c83f2380-e6dc-48f9-8d9b-588f3bc7fa34] Running
I0930 10:21:49.338802 8372 system_pods.go:89] "metrics-server-84c5f94fbc-72src" [2328d76c-f121-44e6-894a-b82153cbb0b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0930 10:21:49.338813 8372 system_pods.go:89] "nvidia-device-plugin-daemonset-ftwnl" [8b10a7e7-ec39-4b16-8d9f-33979a0e6e8d] Running
I0930 10:21:49.338819 8372 system_pods.go:89] "registry-66c9cd494c-rdvzj" [1071ed50-a346-48af-bd60-fb6e526e1d58] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0930 10:21:49.338825 8372 system_pods.go:89] "registry-proxy-ggxvp" [a0c7860c-3f6b-40f2-9761-cd6466b5e812] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0930 10:21:49.338832 8372 system_pods.go:89] "snapshot-controller-56fcc65765-kth5m" [d7c3897c-dd10-4c1e-a9ce-f2691e7f1c92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:49.338842 8372 system_pods.go:89] "snapshot-controller-56fcc65765-pssjz" [a7de780e-cc75-4d90-9860-be9d0ba459d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:49.338846 8372 system_pods.go:89] "storage-provisioner" [d9f9be36-ec15-42fe-ae1c-03e9bd9fbd83] Running
I0930 10:21:49.338854 8372 system_pods.go:126] duration metric: took 9.879833ms to wait for k8s-apps to be running ...
I0930 10:21:49.338875 8372 system_svc.go:44] waiting for kubelet service to be running ....
I0930 10:21:49.338950 8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0930 10:21:49.350950 8372 system_svc.go:56] duration metric: took 12.07497ms WaitForService to wait for kubelet
I0930 10:21:49.350985 8372 kubeadm.go:582] duration metric: took 18.051354972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0930 10:21:49.351004 8372 node_conditions.go:102] verifying NodePressure condition ...
I0930 10:21:49.355053 8372 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0930 10:21:49.355085 8372 node_conditions.go:123] node cpu capacity is 2
I0930 10:21:49.355098 8372 node_conditions.go:105] duration metric: took 4.089593ms to run NodePressure ...
I0930 10:21:49.355110 8372 start.go:241] waiting for startup goroutines ...
I0930 10:21:49.441388 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:49.823847 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:49.826042 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:49.940655 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:50.325159 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:50.326405 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:50.440986 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:50.827343 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:50.829085 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:50.941775 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:51.324602 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:51.326132 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:51.440815 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:51.826145 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:51.827362 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:51.941728 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:52.325271 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:52.326138 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:52.440533 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:52.826198 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:52.827122 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:52.941249 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:53.324058 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:53.325347 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:53.441370 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:53.823442 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:53.826606 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:53.940994 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:54.326199 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:54.327427 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:54.441969 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:54.825387 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:54.825878 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:54.940229 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:55.324856 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:55.329490 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:55.440995 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:55.823861 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:55.824971 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:55.941194 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:56.324753 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:56.327070 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:56.441453 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:56.824285 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:56.826107 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:56.941264 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:57.325323 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:57.326200 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:57.440961 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:57.826247 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:57.826566 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:57.940763 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:58.325200 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:58.326227 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:58.440585 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:58.824887 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:58.826237 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:58.940599 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:59.324219 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:59.326410 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:59.441800 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:59.824953 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:59.826190 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:21:59.940908 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:00.332529 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:00.333393 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:00.441474 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:00.838704 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:00.839640 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:00.941564 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:01.325477 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:01.326401 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:01.441049 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:01.825689 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:01.826905 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:01.940753 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:02.326385 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:02.328244 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:02.440035 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:02.824940 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:02.827062 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:02.940660 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:03.325236 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:03.326119 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:03.441200 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:03.827566 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:22:03.828407 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:03.941029 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:04.326220 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:04.326860 8372 kapi.go:107] duration metric: took 20.506605548s to wait for kubernetes.io/minikube-addons=registry ...
I0930 10:22:04.440317 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:04.849977 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:04.941416 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:05.330212 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:05.441204 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:05.836535 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:05.942229 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:06.326491 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:06.441729 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:06.827023 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:06.941483 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:07.325877 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:07.440663 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:07.828721 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:07.940680 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:08.325396 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:08.440677 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:08.825686 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:08.941182 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:09.325899 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:09.441415 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:09.826041 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:09.941331 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:10.326220 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:10.442338 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:10.825571 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:10.941491 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:11.325783 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:11.440343 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:11.826929 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:11.941467 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:12.331713 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:12.441737 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:12.826124 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:12.941172 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:13.336469 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:13.441082 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:13.833734 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:13.941597 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:14.326172 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:14.445706 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:14.825427 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:14.940912 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:15.326618 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:15.441450 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:15.827754 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:15.941728 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:16.326148 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:16.440890 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:16.827726 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:16.941317 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:17.326016 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:17.441447 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:17.831594 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:17.941182 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:18.325022 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:18.440673 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:18.828704 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:18.942197 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:19.332454 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:19.440801 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:19.826573 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:19.940746 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:20.325197 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:20.440818 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:20.825775 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:20.949086 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:21.326660 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:21.442140 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:21.828582 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:21.941007 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:22.326434 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:22.440607 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:22.829341 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:22.941307 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:23.326986 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:23.445145 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:23.826927 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:23.940468 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:24.325541 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:24.440962 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:24.825602 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:24.941001 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:25.325805 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:25.440403 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:25.825868 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:25.940379 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:26.326757 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:26.440140 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:26.826132 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:26.942492 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:27.325914 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:27.441059 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:27.827179 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:27.940723 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:28.325997 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:28.441985 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:28.828752 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:28.941630 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:29.326437 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:29.441368 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:29.825212 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:29.940566 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:30.326029 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:30.443704 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:30.825630 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:30.940680 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:31.326280 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:31.440716 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:31.828490 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:31.941739 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:32.325894 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:32.440375 8372 kapi.go:107] duration metric: took 47.504506187s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0930 10:22:32.825680 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:33.335363 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:33.825391 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:34.326781 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:34.825960 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:35.326134 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:35.830381 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:36.325610 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:36.825857 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:37.325316 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:37.826609 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:38.325891 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:38.825773 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:39.325961 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:39.825937 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:40.325731 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:40.824963 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:41.326372 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:41.825528 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:42.325423 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:42.825271 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:43.326215 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:43.826600 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:44.325376 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:44.825846 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:45.327210 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:45.826193 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:46.326138 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:46.825637 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:47.326570 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:47.826190 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:48.324963 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:48.825086 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:49.325397 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:49.826049 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:50.326323 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:50.825378 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:51.326704 8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0930 10:22:51.846862 8372 kapi.go:107] duration metric: took 1m8.025610103s to wait for app.kubernetes.io/name=ingress-nginx ...
I0930 10:23:09.530064 8372 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0930 10:23:09.530091 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:10.008443 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:10.509519 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:11.012959 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:11.508671 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:12.008346 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:12.509288 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:13.009254 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:13.509245 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:14.009187 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:14.508450 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:15.009933 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:15.508584 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:16.009327 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:16.509357 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:17.008862 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:17.508535 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:18.008661 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:18.508635 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:19.008739 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:19.508699 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:20.009393 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:20.509067 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:21.009019 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:21.508624 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:22.009027 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:22.509077 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:23.008704 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:23.508307 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:24.009434 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:24.509274 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:25.009210 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:25.508590 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:26.009303 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:26.508719 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:27.008809 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:27.508685 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:28.009839 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:28.508979 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:29.008501 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:29.508723 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:30.009361 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:30.509590 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:31.009070 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:31.508492 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:32.008485 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:32.509146 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:33.008457 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:33.508938 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:34.008619 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:34.509031 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:35.008897 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:35.511641 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:36.009524 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:36.508444 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:37.009305 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:37.508978 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:38.008984 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:38.509635 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:39.008640 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:39.508364 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:40.008716 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:40.508142 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:41.008385 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:41.509192 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:42.008675 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:42.508294 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:43.008944 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:43.508511 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:44.009478 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:44.508765 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:45.009523 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:45.509745 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:46.009251 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:46.508186 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:47.009024 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:47.508433 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:48.009108 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:48.509353 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:49.009333 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:49.508514 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:50.010503 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:50.509337 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:51.008575 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:51.509735 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:52.009459 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:52.508905 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:53.008708 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:53.509213 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:54.009535 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:54.509276 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:55.009115 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:55.510790 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:56.011378 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:56.508862 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:57.008439 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:57.509378 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:58.008716 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:58.508739 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:59.008642 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:23:59.509198 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:00.011907 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:00.508311 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:01.008911 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:01.508566 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:02.010121 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:02.508453 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:03.009046 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:03.508540 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:04.008432 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:04.508909 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:05.009330 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:05.508470 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:06.009316 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:06.508766 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:07.007991 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:07.508723 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:08.008934 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:08.509797 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:09.008388 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:09.510026 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:10.009073 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:10.508590 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:11.009408 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:11.508784 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:12.008683 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:12.508801 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:13.009096 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:13.508367 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:14.009427 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:14.509264 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:15.019060 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:15.508791 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:16.013116 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:16.508264 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:17.009352 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:17.509284 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:18.010118 8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:24:18.509041 8372 kapi.go:107] duration metric: took 2m32.00388476s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0930 10:24:18.511665 8372 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-703944 cluster.
I0930 10:24:18.514452 8372 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0930 10:24:18.516892 8372 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0930 10:24:18.518744 8372 out.go:177] * Enabled addons: volcano, nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0930 10:24:18.520508 8372 addons.go:510] duration metric: took 2m47.220466078s for enable addons: enabled=[volcano nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0930 10:24:18.520569 8372 start.go:246] waiting for cluster config update ...
I0930 10:24:18.520597 8372 start.go:255] writing updated cluster config ...
I0930 10:24:18.520889 8372 ssh_runner.go:195] Run: rm -f paused
I0930 10:24:18.845340 8372 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0930 10:24:18.847736 8372 out.go:177] * Done! kubectl is now configured to use "addons-703944" cluster and "default" namespace by default
==> Docker <==
Sep 30 10:33:49 addons-703944 dockerd[1288]: time="2024-09-30T10:33:49.530275943Z" level=info msg="ignoring event" container=b01247b84ab8a9df4b46e494d1f77dd0dbf2c5926a31ae9e2cc811b02838c544 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:33:49 addons-703944 dockerd[1288]: time="2024-09-30T10:33:49.672394655Z" level=info msg="ignoring event" container=fd4f358ae1829e2bd243d474b8171777e310986306967fea9a63228dbe11aa93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:33:49 addons-703944 dockerd[1288]: time="2024-09-30T10:33:49.724696676Z" level=info msg="ignoring event" container=20c26c82689fcb72554b438b52b5e1a578bef0ab822a0096123a1918df0bb8ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:33:56 addons-703944 dockerd[1288]: time="2024-09-30T10:33:56.267493316Z" level=info msg="ignoring event" container=a5864276b4f3d638ed913defe38c88eb8b6590deb2d3c1b1564168723aa9a8b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:33:56 addons-703944 dockerd[1288]: time="2024-09-30T10:33:56.417644275Z" level=info msg="ignoring event" container=c5e388071a29f6149e9e1bd1495739173a415a09542cf5a28f880736bbdee644 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:33:57 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:33:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ebd48ad77ad7af597184897923c29db8fc520cd616b26dced6b371ae0befcb5/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 30 10:33:57 addons-703944 dockerd[1288]: time="2024-09-30T10:33:57.302079247Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=4c8efa3e33433a7f traceID=09b40a6d890d1086716807a4bbe31f4b
Sep 30 10:33:57 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:33:57Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Sep 30 10:33:58 addons-703944 dockerd[1288]: time="2024-09-30T10:33:58.020613476Z" level=info msg="ignoring event" container=625ac88b7fd165338ab8fdccbfc4cd1b244052dd26eeb9d4da58d01e052acc84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:00 addons-703944 dockerd[1288]: time="2024-09-30T10:34:00.058647495Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a26cf2d551a02c88 traceID=3eee85ae8207c459dda9bd736a893e4b
Sep 30 10:34:00 addons-703944 dockerd[1288]: time="2024-09-30T10:34:00.062362789Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a26cf2d551a02c88 traceID=3eee85ae8207c459dda9bd736a893e4b
Sep 30 10:34:00 addons-703944 dockerd[1288]: time="2024-09-30T10:34:00.157282436Z" level=info msg="ignoring event" container=3ebd48ad77ad7af597184897923c29db8fc520cd616b26dced6b371ae0befcb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:02 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2713756d0b4ea1ce2193817f5964ca2034aaba39f2a28fa26666b182a21b13c6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 30 10:34:02 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:02Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
Sep 30 10:34:02 addons-703944 dockerd[1288]: time="2024-09-30T10:34:02.951263898Z" level=info msg="ignoring event" container=580672e782c5bd5a16a4318b576d4298676fb385fc0e78e57e4f9b9e9bfd9ba9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:04 addons-703944 dockerd[1288]: time="2024-09-30T10:34:04.300651020Z" level=info msg="ignoring event" container=2713756d0b4ea1ce2193817f5964ca2034aaba39f2a28fa26666b182a21b13c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:05 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ac43b835fe5c9ab9556c5d49ad01846b174068ec13b28cfb27541adf9de723c/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 30 10:34:06 addons-703944 dockerd[1288]: time="2024-09-30T10:34:06.189085606Z" level=info msg="ignoring event" container=a602b281f0c43351f13dcabf9760187c497ff3580de5377efed475dd2a3a811f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:07 addons-703944 dockerd[1288]: time="2024-09-30T10:34:07.364642423Z" level=info msg="ignoring event" container=3ac43b835fe5c9ab9556c5d49ad01846b174068ec13b28cfb27541adf9de723c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:15 addons-703944 dockerd[1288]: time="2024-09-30T10:34:15.526430638Z" level=info msg="ignoring event" container=9d8a251cdc182765f1b6afefb4ef602279f9a8002dc2a359d7ff4cff4d610403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.222330179Z" level=info msg="ignoring event" container=8ff34a9a05ef2b99e0385cd38068272c1da8ac2ac4042e5f40e6d69ea7e24829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.278810636Z" level=info msg="ignoring event" container=4d968f7e8c938ada722f733a6fcef97b5da7b2c4fdba2828ae041467ae711d62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.498128732Z" level=info msg="ignoring event" container=7ef5b486717bc51d996fab293bd8cfac2a52478290cb639fd105a2f59f7989f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:34:16 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:16Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-66c9cd494c-rdvzj_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7ef5b486717bc51d996fab293bd8cfac2a52478290cb639fd105a2f59f7989f2\""
Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.609641796Z" level=info msg="ignoring event" container=dab57879dd7e2b105b50f69dd335cdc41c0f6b44ac15bc924eb44794da721f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
a602b281f0c43 fc9db2894f4e4 11 seconds ago Exited helper-pod 0 3ac43b835fe5c helper-pod-delete-pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4
580672e782c5b busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140 15 seconds ago Exited busybox 0 2713756d0b4ea test-local-path
625ac88b7fd16 busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 20 seconds ago Exited helper-pod 0 3ebd48ad77ad7 helper-pod-create-pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4
e56e6fc59f851 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 8b9832f30788a gcp-auth-89d5ffd79-qbk9q
b500e50fd74d0 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 e84d22f08f9c9 ingress-nginx-controller-bc57996ff-fhcnz
7ff05e7f77cd3 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 12 minutes ago Exited patch 0 d611ed1c8740f ingress-nginx-admission-patch-gwx9r
4d3c2a6042618 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 12 minutes ago Exited create 0 ba84e1feed742 ingress-nginx-admission-create-9prwn
b124d58586438 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 12 minutes ago Running gadget 0 e230ab8541b09 gadget-7txl9
cc02ee2dc585e rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 23482939aa7e5 local-path-provisioner-86d989889c-2w5jv
749c096625179 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 30e12a384193e metrics-server-84c5f94fbc-72src
4d968f7e8c938 gcr.io/k8s-minikube/kube-registry-proxy@sha256:9fd683b2e47c5fded3410c69f414f05cdee737597569f52854347f889b118982 12 minutes ago Exited registry-proxy 0 dab57879dd7e2 registry-proxy-ggxvp
8ff34a9a05ef2 registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 12 minutes ago Exited registry 0 7ef5b486717bc registry-66c9cd494c-rdvzj
7701316bb1b7d gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 b063e0b2bb844 kube-ingress-dns-minikube
f0dc82196b031 gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 12 minutes ago Running cloud-spanner-emulator 0 e3cde4c3830b7 cloud-spanner-emulator-5b584cc74-zl2c5
9b7883641a7b6 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 51a83c3cfcd39 storage-provisioner
471d0bb84337c 2f6c962e7b831 12 minutes ago Running coredns 0 a4f793993c5fd coredns-7c65d6cfc9-whncm
cf5b880fad343 24a140c548c07 12 minutes ago Running kube-proxy 0 221070f364cb4 kube-proxy-xl4mj
8fe61e0b6c18a 279f381cb3736 12 minutes ago Running kube-controller-manager 0 6c49a86599975 kube-controller-manager-addons-703944
32399c9ffe928 27e3830e14027 12 minutes ago Running etcd 0 8ed76639343d4 etcd-addons-703944
bd7f169d8e3e5 d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 b374e3e99a214 kube-apiserver-addons-703944
9f4afc2251bd6 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 a91275cb4ae88 kube-scheduler-addons-703944
==> controller_ingress [b500e50fd74d] <==
W0930 10:22:51.031771 6 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0930 10:22:51.031919 6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I0930 10:22:51.040996 6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
I0930 10:22:51.473317 6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0930 10:22:51.489287 6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0930 10:22:51.498831 6 nginx.go:271] "Starting NGINX Ingress controller"
I0930 10:22:51.508862 6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"12220eba-5361-4cf1-a44f-13cb77cc563b", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0930 10:22:51.517189 6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"50ebfe59-347b-4363-a4c9-597f183a62d8", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0930 10:22:51.517380 6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"e3c27bc0-7211-467e-ab56-b712b31992b9", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0930 10:22:52.700863 6 nginx.go:317] "Starting NGINX process"
I0930 10:22:52.701115 6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0930 10:22:52.701591 6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0930 10:22:52.706635 6 controller.go:193] "Configuration changes detected, backend reload required"
I0930 10:22:52.720230 6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0930 10:22:52.720727 6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-fhcnz"
I0930 10:22:52.729684 6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-fhcnz" node="addons-703944"
I0930 10:22:52.752734 6 controller.go:213] "Backend successfully reloaded"
I0930 10:22:52.752945 6 controller.go:224] "Initial sync, sleeping for 1 second"
I0930 10:22:52.753477 6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-fhcnz", UID:"475b22d0-6c5a-4aab-9cf1-9d3ebaf78a75", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
==> coredns [471d0bb84337] <==
[INFO] 10.244.0.7:49312 - 29993 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000102874s
[INFO] 10.244.0.7:49312 - 42514 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002011485s
[INFO] 10.244.0.7:49312 - 7545 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002191642s
[INFO] 10.244.0.7:49312 - 31021 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00012749s
[INFO] 10.244.0.7:49312 - 64634 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102637s
[INFO] 10.244.0.7:52056 - 13052 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131158s
[INFO] 10.244.0.7:52056 - 13248 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000078727s
[INFO] 10.244.0.7:44412 - 15106 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046941s
[INFO] 10.244.0.7:44412 - 15559 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064245s
[INFO] 10.244.0.7:34827 - 23431 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000198308s
[INFO] 10.244.0.7:34827 - 23587 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057476s
[INFO] 10.244.0.7:37206 - 40747 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001210437s
[INFO] 10.244.0.7:37206 - 41208 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001179422s
[INFO] 10.244.0.7:53392 - 17918 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067011s
[INFO] 10.244.0.7:53392 - 18073 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069086s
[INFO] 10.244.0.25:43132 - 54509 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000283488s
[INFO] 10.244.0.25:59003 - 27505 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151094s
[INFO] 10.244.0.25:49456 - 63764 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117175s
[INFO] 10.244.0.25:58881 - 43085 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076724s
[INFO] 10.244.0.25:59699 - 18237 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110151s
[INFO] 10.244.0.25:41399 - 9170 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070661s
[INFO] 10.244.0.25:60955 - 30379 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002580124s
[INFO] 10.244.0.25:45346 - 11744 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007987233s
[INFO] 10.244.0.25:32796 - 27339 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001568948s
[INFO] 10.244.0.25:40716 - 63008 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001741899s
==> describe nodes <==
Name: addons-703944
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-703944
kubernetes.io/os=linux
minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
minikube.k8s.io/name=addons-703944
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-703944
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 30 Sep 2024 10:21:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-703944
AcquireTime: <unset>
RenewTime: Mon, 30 Sep 2024 10:34:10 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 30 Sep 2024 10:30:06 +0000 Mon, 30 Sep 2024 10:21:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 30 Sep 2024 10:30:06 +0000 Mon, 30 Sep 2024 10:21:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 30 Sep 2024 10:30:06 +0000 Mon, 30 Sep 2024 10:21:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 30 Sep 2024 10:30:06 +0000 Mon, 30 Sep 2024 10:21:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-703944
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: 976b457a99284b958149a831017d514d
System UUID: c8b66987-d94a-48ea-9059-80a29a142280
Boot ID: 12064027-174b-4ce0-8a4a-48eaa21ecbf6
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m18s
default cloud-spanner-emulator-5b584cc74-zl2c5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-7txl9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-qbk9q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-fhcnz 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-whncm 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system etcd-addons-703944 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-703944 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-703944 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-xl4mj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-703944 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-72src 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-2w5jv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (5%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal NodeAllocatableEnforced 13m kubelet Updated Node Allocatable limit across pods
Normal Starting 13m kubelet Starting kubelet.
Warning CgroupV1 13m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeHasSufficientMemory 12m (x8 over 13m) kubelet Node addons-703944 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 12m (x7 over 13m) kubelet Node addons-703944 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 12m (x7 over 13m) kubelet Node addons-703944 status is now: NodeHasNoDiskPressure
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-703944 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-703944 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-703944 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-703944 event: Registered Node addons-703944 in Controller
==> dmesg <==
[Sep30 10:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014927] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.458782] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.064452] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.020217] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.681870] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.380136] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [32399c9ffe92] <==
{"level":"info","ts":"2024-09-30T10:21:18.933375Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-30T10:21:18.933385Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-30T10:21:19.602529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-30T10:21:19.602586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-30T10:21:19.602621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-30T10:21:19.602860Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-30T10:21:19.602949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-30T10:21:19.603063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-30T10:21:19.603171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-30T10:21:19.607726Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-703944 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-30T10:21:19.609625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-30T10:21:19.609999Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:19.613574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-30T10:21:19.615561Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-30T10:21:19.615686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-30T10:21:19.615957Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-30T10:21:19.616489Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-30T10:21:19.616837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-30T10:21:19.616944Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:19.617009Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:19.617038Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:19.617758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-30T10:31:21.059348Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1855}
{"level":"info","ts":"2024-09-30T10:31:21.108697Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1855,"took":"48.522977ms","hash":2943355290,"current-db-size-bytes":8835072,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4743168,"current-db-size-in-use":"4.7 MB"}
{"level":"info","ts":"2024-09-30T10:31:21.108748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2943355290,"revision":1855,"compact-revision":-1}
==> gcp-auth [e56e6fc59f85] <==
2024/09/30 10:24:17 GCP Auth Webhook started!
2024/09/30 10:24:35 Ready to marshal response ...
2024/09/30 10:24:35 Ready to write response ...
2024/09/30 10:24:35 Ready to marshal response ...
2024/09/30 10:24:35 Ready to write response ...
2024/09/30 10:24:59 Ready to marshal response ...
2024/09/30 10:24:59 Ready to write response ...
2024/09/30 10:24:59 Ready to marshal response ...
2024/09/30 10:24:59 Ready to write response ...
2024/09/30 10:25:00 Ready to marshal response ...
2024/09/30 10:25:00 Ready to write response ...
2024/09/30 10:33:15 Ready to marshal response ...
2024/09/30 10:33:15 Ready to write response ...
2024/09/30 10:33:23 Ready to marshal response ...
2024/09/30 10:33:23 Ready to write response ...
2024/09/30 10:33:32 Ready to marshal response ...
2024/09/30 10:33:32 Ready to write response ...
2024/09/30 10:33:56 Ready to marshal response ...
2024/09/30 10:33:56 Ready to write response ...
2024/09/30 10:33:56 Ready to marshal response ...
2024/09/30 10:33:56 Ready to write response ...
2024/09/30 10:34:05 Ready to marshal response ...
2024/09/30 10:34:05 Ready to write response ...
==> kernel <==
10:34:17 up 16 min, 0 users, load average: 2.29, 1.08, 0.72
Linux addons-703944 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [bd7f169d8e3e] <==
I0930 10:24:49.654459 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0930 10:24:50.033075 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0930 10:24:50.118747 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0930 10:24:50.357736 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0930 10:24:50.460773 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0930 10:24:50.655313 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0930 10:24:50.743787 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0930 10:24:50.872349 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0930 10:24:50.976768 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0930 10:24:51.358383 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0930 10:24:51.456955 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0930 10:33:29.315948 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0930 10:33:49.317208 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0930 10:33:49.317255 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0930 10:33:49.350071 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0930 10:33:49.350325 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0930 10:33:49.360690 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0930 10:33:49.360738 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0930 10:33:49.385490 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0930 10:33:49.385542 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0930 10:33:49.418462 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0930 10:33:49.418500 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0930 10:33:50.361596 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0930 10:33:50.418080 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W0930 10:33:50.496229 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
==> kube-controller-manager [8fe61e0b6c18] <==
E0930 10:33:53.713971 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:33:53.952917 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:33:53.952968 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:33:57.488331 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:33:57.488376 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:33:58.287352 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:33:58.287398 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:33:59.987619 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:33:59.987665 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0930 10:34:00.816577 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0930 10:34:00.816711 1 shared_informer.go:320] Caches are synced for resource quota
I0930 10:34:01.070530 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0930 10:34:01.070584 1 shared_informer.go:320] Caches are synced for garbage collector
W0930 10:34:05.151519 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:34:05.151594 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0930 10:34:05.967509 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="8.787µs"
W0930 10:34:06.432730 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:34:06.432771 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:34:07.136754 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:34:07.136800 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:34:10.275694 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:34:10.275734 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:34:12.794660 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:34:12.794704 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0930 10:34:16.101514 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.76µs"
==> kube-proxy [cf5b880fad34] <==
I0930 10:21:32.314157 1 server_linux.go:66] "Using iptables proxy"
I0930 10:21:32.422183 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0930 10:21:32.422240 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0930 10:21:32.448738 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0930 10:21:32.448809 1 server_linux.go:169] "Using iptables Proxier"
I0930 10:21:32.450714 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0930 10:21:32.451032 1 server.go:483] "Version info" version="v1.31.1"
I0930 10:21:32.451048 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0930 10:21:32.453292 1 config.go:199] "Starting service config controller"
I0930 10:21:32.453333 1 shared_informer.go:313] Waiting for caches to sync for service config
I0930 10:21:32.453364 1 config.go:105] "Starting endpoint slice config controller"
I0930 10:21:32.453375 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0930 10:21:32.465704 1 config.go:328] "Starting node config controller"
I0930 10:21:32.465723 1 shared_informer.go:313] Waiting for caches to sync for node config
I0930 10:21:32.554600 1 shared_informer.go:320] Caches are synced for service config
I0930 10:21:32.554710 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0930 10:21:32.566742 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [9f4afc2251bd] <==
E0930 10:21:23.704809 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0930 10:21:23.704779 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.704940 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0930 10:21:23.704964 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.705114 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0930 10:21:23.705273 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0930 10:21:23.705386 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0930 10:21:23.705660 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.705800 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0930 10:21:23.706917 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.705875 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0930 10:21:23.706973 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.705912 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0930 10:21:23.706994 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.705920 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0930 10:21:23.707013 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.705994 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0930 10:21:23.707057 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.706032 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0930 10:21:23.707102 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.706088 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0930 10:21:23.707129 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:24.573771 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0930 10:21:24.573811 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
I0930 10:21:24.995310 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623286 2330 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3283b04e-4ea2-4110-966e-2e42c30b934a-data\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623336 2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x6vj2\" (UniqueName: \"kubernetes.io/projected/3283b04e-4ea2-4110-966e-2e42c30b934a-kube-api-access-x6vj2\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623349 2330 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3283b04e-4ea2-4110-966e-2e42c30b934a-script\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623359 2330 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3283b04e-4ea2-4110-966e-2e42c30b934a-gcp-creds\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:08 addons-703944 kubelet[2330]: I0930 10:34:08.287288 2330 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ac43b835fe5c9ab9556c5d49ad01846b174068ec13b28cfb27541adf9de723c"
Sep 30 10:34:08 addons-703944 kubelet[2330]: E0930 10:34:08.824841 2330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f5cee3a4-bea9-470a-ace6-39db000ad219"
Sep 30 10:34:11 addons-703944 kubelet[2330]: I0930 10:34:11.834311 2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3283b04e-4ea2-4110-966e-2e42c30b934a" path="/var/lib/kubelet/pods/3283b04e-4ea2-4110-966e-2e42c30b934a/volumes"
Sep 30 10:34:12 addons-703944 kubelet[2330]: E0930 10:34:12.825267 2330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="f7c8a150-0489-4468-a63f-4623d31323a7"
Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.674765 2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7c8a150-0489-4468-a63f-4623d31323a7-gcp-creds\") pod \"f7c8a150-0489-4468-a63f-4623d31323a7\" (UID: \"f7c8a150-0489-4468-a63f-4623d31323a7\") "
Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.674845 2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lpdn\" (UniqueName: \"kubernetes.io/projected/f7c8a150-0489-4468-a63f-4623d31323a7-kube-api-access-2lpdn\") pod \"f7c8a150-0489-4468-a63f-4623d31323a7\" (UID: \"f7c8a150-0489-4468-a63f-4623d31323a7\") "
Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.675270 2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7c8a150-0489-4468-a63f-4623d31323a7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f7c8a150-0489-4468-a63f-4623d31323a7" (UID: "f7c8a150-0489-4468-a63f-4623d31323a7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.679451 2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c8a150-0489-4468-a63f-4623d31323a7-kube-api-access-2lpdn" (OuterVolumeSpecName: "kube-api-access-2lpdn") pod "f7c8a150-0489-4468-a63f-4623d31323a7" (UID: "f7c8a150-0489-4468-a63f-4623d31323a7"). InnerVolumeSpecName "kube-api-access-2lpdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.775222 2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2lpdn\" (UniqueName: \"kubernetes.io/projected/f7c8a150-0489-4468-a63f-4623d31323a7-kube-api-access-2lpdn\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.775268 2330 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7c8a150-0489-4468-a63f-4623d31323a7-gcp-creds\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.682772 2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kg4f\" (UniqueName: \"kubernetes.io/projected/1071ed50-a346-48af-bd60-fb6e526e1d58-kube-api-access-4kg4f\") pod \"1071ed50-a346-48af-bd60-fb6e526e1d58\" (UID: \"1071ed50-a346-48af-bd60-fb6e526e1d58\") "
Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.688622 2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1071ed50-a346-48af-bd60-fb6e526e1d58-kube-api-access-4kg4f" (OuterVolumeSpecName: "kube-api-access-4kg4f") pod "1071ed50-a346-48af-bd60-fb6e526e1d58" (UID: "1071ed50-a346-48af-bd60-fb6e526e1d58"). InnerVolumeSpecName "kube-api-access-4kg4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.783705 2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjg79\" (UniqueName: \"kubernetes.io/projected/a0c7860c-3f6b-40f2-9761-cd6466b5e812-kube-api-access-pjg79\") pod \"a0c7860c-3f6b-40f2-9761-cd6466b5e812\" (UID: \"a0c7860c-3f6b-40f2-9761-cd6466b5e812\") "
Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.784156 2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4kg4f\" (UniqueName: \"kubernetes.io/projected/1071ed50-a346-48af-bd60-fb6e526e1d58-kube-api-access-4kg4f\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.785700 2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c7860c-3f6b-40f2-9761-cd6466b5e812-kube-api-access-pjg79" (OuterVolumeSpecName: "kube-api-access-pjg79") pod "a0c7860c-3f6b-40f2-9761-cd6466b5e812" (UID: "a0c7860c-3f6b-40f2-9761-cd6466b5e812"). InnerVolumeSpecName "kube-api-access-pjg79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.884537 2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pjg79\" (UniqueName: \"kubernetes.io/projected/a0c7860c-3f6b-40f2-9761-cd6466b5e812-kube-api-access-pjg79\") on node \"addons-703944\" DevicePath \"\""
Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.716715 2330 scope.go:117] "RemoveContainer" containerID="4d968f7e8c938ada722f733a6fcef97b5da7b2c4fdba2828ae041467ae711d62"
Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.783182 2330 scope.go:117] "RemoveContainer" containerID="8ff34a9a05ef2b99e0385cd38068272c1da8ac2ac4042e5f40e6d69ea7e24829"
Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.837436 2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1071ed50-a346-48af-bd60-fb6e526e1d58" path="/var/lib/kubelet/pods/1071ed50-a346-48af-bd60-fb6e526e1d58/volumes"
Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.837821 2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c7860c-3f6b-40f2-9761-cd6466b5e812" path="/var/lib/kubelet/pods/a0c7860c-3f6b-40f2-9761-cd6466b5e812/volumes"
Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.838196 2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c8a150-0489-4468-a63f-4623d31323a7" path="/var/lib/kubelet/pods/f7c8a150-0489-4468-a63f-4623d31323a7/volumes"
==> storage-provisioner [9b7883641a7b] <==
I0930 10:21:38.706421 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0930 10:21:38.722907 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0930 10:21:38.722958 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0930 10:21:38.734690 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0930 10:21:38.736897 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-703944_7e616623-6de1-49c2-b745-f59394bb4ffc!
I0930 10:21:38.747241 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7c375b4-4b70-4b95-ad2d-54a4fbae59e9", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-703944_7e616623-6de1-49c2-b745-f59394bb4ffc became leader
I0930 10:21:38.838085 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-703944_7e616623-6de1-49c2-b745-f59394bb4ffc!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-703944 -n addons-703944
helpers_test.go:261: (dbg) Run: kubectl --context addons-703944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-703944 describe pod busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-703944 describe pod busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r: exit status 1 (105.395608ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-703944/192.168.49.2
Start Time:       Mon, 30 Sep 2024 10:24:59 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hwfpp (ro)
Conditions:
  Type                       Status
  PodReadyToStartContainers  True
  Initialized                True
  Ready                      False
  ContainersReady            False
  PodScheduled               True
Volumes:
  kube-api-access-hwfpp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m19s                  default-scheduler  Successfully assigned default/busybox to addons-703944
  Warning  Failed     7m56s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    7m41s (x4 over 9m18s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m40s (x4 over 9m18s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m40s (x4 over 9m18s)  kubelet            Error: ErrImagePull
  Normal   BackOff    4m9s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-9prwn" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-gwx9r" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-703944 describe pod busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.49s)