=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 6.702708ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-5nsr7" [489c11e2-9ffb-44d0-ab77-26a06d440d24] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00504778s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6hk9r" [7c97bbfc-93ec-48a1-aeb1-7e1f322373db] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004516561s
addons_test.go:338: (dbg) Run: kubectl --context addons-093926 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-093926 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-093926 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.111936976s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-093926 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-arm64 -p addons-093926 ip
2024/09/23 23:51:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-arm64 -p addons-093926 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-093926
helpers_test.go:235: (dbg) docker inspect addons-093926:
-- stdout --
[
{
"Id": "4acc8540e5df4fccb2a5a3ca15f367a0342ba5626c5ea87bbd1e5fc7eba12828",
"Created": "2024-09-23T23:38:18.363211563Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8796,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-23T23:38:18.559285971Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
"ResolvConfPath": "/var/lib/docker/containers/4acc8540e5df4fccb2a5a3ca15f367a0342ba5626c5ea87bbd1e5fc7eba12828/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/4acc8540e5df4fccb2a5a3ca15f367a0342ba5626c5ea87bbd1e5fc7eba12828/hostname",
"HostsPath": "/var/lib/docker/containers/4acc8540e5df4fccb2a5a3ca15f367a0342ba5626c5ea87bbd1e5fc7eba12828/hosts",
"LogPath": "/var/lib/docker/containers/4acc8540e5df4fccb2a5a3ca15f367a0342ba5626c5ea87bbd1e5fc7eba12828/4acc8540e5df4fccb2a5a3ca15f367a0342ba5626c5ea87bbd1e5fc7eba12828-json.log",
"Name": "/addons-093926",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-093926:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-093926",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b491be732b7149ca6fc0d9874fb3f0fbd9f8ebdff55d512887b407ca1a36a848-init/diff:/var/lib/docker/overlay2/c163e1d244454ef137d50729a5d3136380e9198c02303a189f59bba4dcfdb723/diff",
"MergedDir": "/var/lib/docker/overlay2/b491be732b7149ca6fc0d9874fb3f0fbd9f8ebdff55d512887b407ca1a36a848/merged",
"UpperDir": "/var/lib/docker/overlay2/b491be732b7149ca6fc0d9874fb3f0fbd9f8ebdff55d512887b407ca1a36a848/diff",
"WorkDir": "/var/lib/docker/overlay2/b491be732b7149ca6fc0d9874fb3f0fbd9f8ebdff55d512887b407ca1a36a848/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "addons-093926",
"Source": "/var/lib/docker/volumes/addons-093926/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "addons-093926",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-093926",
"name.minikube.sigs.k8s.io": "addons-093926",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "11dc1aac91fe7f8c13c47d7d85f46d104e751244bb4f5cfbdac40e1e8758ec9a",
"SandboxKey": "/var/run/docker/netns/11dc1aac91fe",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-093926": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "c660da7bdf3c0c274e268cb2ac1ab02b4990db466112e2d4dceaf6cc0240805f",
"EndpointID": "bed6770423097cf7c400f2b6c82a1678094d6c912e5f684fc92161adead1a986",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-093926",
"4acc8540e5df"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-093926 -n addons-093926
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-093926 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-093926 logs -n 25: (1.14669408s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-914233 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | -p download-only-914233 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p download-only-914233 | download-only-914233 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| start | -o=json --download-only | download-only-287226 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | -p download-only-287226 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p download-only-287226 | download-only-287226 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p download-only-914233 | download-only-914233 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p download-only-287226 | download-only-287226 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| start | --download-only -p | download-docker-598588 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | download-docker-598588 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-598588 | download-docker-598588 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| start | --download-only -p | binary-mirror-966037 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | binary-mirror-966037 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:42025 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-966037 | binary-mirror-966037 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| addons | enable dashboard -p | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | addons-093926 | | | | | |
| addons | disable dashboard -p | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | addons-093926 | | | | | |
| start | -p addons-093926 --wait=true | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:41 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-093926 addons disable | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:42 UTC | 23 Sep 24 23:42 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-093926 addons disable | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-093926 addons | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-093926 addons | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-093926 ip | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
| addons | addons-093926 addons disable | addons-093926 | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/23 23:37:54
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 23:37:54.976304 8299 out.go:345] Setting OutFile to fd 1 ...
I0923 23:37:54.976506 8299 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:37:54.976532 8299 out.go:358] Setting ErrFile to fd 2...
I0923 23:37:54.976551 8299 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:37:54.976813 8299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-2224/.minikube/bin
I0923 23:37:54.977319 8299 out.go:352] Setting JSON to false
I0923 23:37:54.978193 8299 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1220,"bootTime":1727133455,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0923 23:37:54.978287 8299 start.go:139] virtualization:
I0923 23:37:54.980885 8299 out.go:177] * [addons-093926] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0923 23:37:54.983034 8299 out.go:177] - MINIKUBE_LOCATION=19696
I0923 23:37:54.983182 8299 notify.go:220] Checking for updates...
I0923 23:37:54.987233 8299 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 23:37:54.989207 8299 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19696-2224/kubeconfig
I0923 23:37:54.991136 8299 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-2224/.minikube
I0923 23:37:54.992934 8299 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0923 23:37:54.994869 8299 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0923 23:37:54.996914 8299 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 23:37:55.042968 8299 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I0923 23:37:55.043088 8299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 23:37:55.095085 8299 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 23:37:55.085094439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 23:37:55.095201 8299 docker.go:318] overlay module found
I0923 23:37:55.097533 8299 out.go:177] * Using the docker driver based on user configuration
I0923 23:37:55.099461 8299 start.go:297] selected driver: docker
I0923 23:37:55.099482 8299 start.go:901] validating driver "docker" against <nil>
I0923 23:37:55.099497 8299 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 23:37:55.100155 8299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 23:37:55.153843 8299 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 23:37:55.144544446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 23:37:55.154061 8299 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0923 23:37:55.154305 8299 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 23:37:55.156375 8299 out.go:177] * Using Docker driver with root privileges
I0923 23:37:55.158291 8299 cni.go:84] Creating CNI manager for ""
I0923 23:37:55.158368 8299 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 23:37:55.158383 8299 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0923 23:37:55.158467 8299 start.go:340] cluster config:
{Name:addons-093926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 23:37:55.160625 8299 out.go:177] * Starting "addons-093926" primary control-plane node in "addons-093926" cluster
I0923 23:37:55.162903 8299 cache.go:121] Beginning downloading kic base image for docker with docker
I0923 23:37:55.164640 8299 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
I0923 23:37:55.166690 8299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 23:37:55.166754 8299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 23:37:55.166766 8299 cache.go:56] Caching tarball of preloaded images
I0923 23:37:55.166770 8299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
I0923 23:37:55.166846 8299 preload.go:172] Found /home/jenkins/minikube-integration/19696-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 23:37:55.166856 8299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 23:37:55.167230 8299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/config.json ...
I0923 23:37:55.167264 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/config.json: {Name:mk05342ee46a48f6b24ca2b046452d3107f19b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:37:55.182752 8299 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
I0923 23:37:55.182850 8299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
I0923 23:37:55.182873 8299 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
I0923 23:37:55.182881 8299 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
I0923 23:37:55.182889 8299 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
I0923 23:37:55.182898 8299 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
I0923 23:38:12.137194 8299 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
I0923 23:38:12.137234 8299 cache.go:194] Successfully downloaded all kic artifacts
I0923 23:38:12.137271 8299 start.go:360] acquireMachinesLock for addons-093926: {Name:mk9e9b9eb75f47e5d9b45153365b05786713d5cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 23:38:12.137461 8299 start.go:364] duration metric: took 152.87µs to acquireMachinesLock for "addons-093926"
I0923 23:38:12.137497 8299 start.go:93] Provisioning new machine with config: &{Name:addons-093926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 23:38:12.137579 8299 start.go:125] createHost starting for "" (driver="docker")
I0923 23:38:12.140998 8299 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0923 23:38:12.141261 8299 start.go:159] libmachine.API.Create for "addons-093926" (driver="docker")
I0923 23:38:12.141297 8299 client.go:168] LocalClient.Create starting
I0923 23:38:12.141445 8299 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca.pem
I0923 23:38:12.489304 8299 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/cert.pem
I0923 23:38:12.850963 8299 cli_runner.go:164] Run: docker network inspect addons-093926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 23:38:12.866046 8299 cli_runner.go:211] docker network inspect addons-093926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 23:38:12.866135 8299 network_create.go:284] running [docker network inspect addons-093926] to gather additional debugging logs...
I0923 23:38:12.866155 8299 cli_runner.go:164] Run: docker network inspect addons-093926
W0923 23:38:12.881249 8299 cli_runner.go:211] docker network inspect addons-093926 returned with exit code 1
I0923 23:38:12.881286 8299 network_create.go:287] error running [docker network inspect addons-093926]: docker network inspect addons-093926: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-093926 not found
I0923 23:38:12.881301 8299 network_create.go:289] output of [docker network inspect addons-093926]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-093926 not found
** /stderr **
I0923 23:38:12.881472 8299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 23:38:12.897063 8299 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001776870}
I0923 23:38:12.897110 8299 network_create.go:124] attempt to create docker network addons-093926 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0923 23:38:12.897167 8299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-093926 addons-093926
I0923 23:38:12.965595 8299 network_create.go:108] docker network addons-093926 192.168.49.0/24 created
I0923 23:38:12.965628 8299 kic.go:121] calculated static IP "192.168.49.2" for the "addons-093926" container
I0923 23:38:12.965704 8299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0923 23:38:12.981036 8299 cli_runner.go:164] Run: docker volume create addons-093926 --label name.minikube.sigs.k8s.io=addons-093926 --label created_by.minikube.sigs.k8s.io=true
I0923 23:38:12.997556 8299 oci.go:103] Successfully created a docker volume addons-093926
I0923 23:38:12.997652 8299 cli_runner.go:164] Run: docker run --rm --name addons-093926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093926 --entrypoint /usr/bin/test -v addons-093926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
I0923 23:38:14.522686 8299 cli_runner.go:217] Completed: docker run --rm --name addons-093926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093926 --entrypoint /usr/bin/test -v addons-093926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (1.524994845s)
I0923 23:38:14.522716 8299 oci.go:107] Successfully prepared a docker volume addons-093926
I0923 23:38:14.522742 8299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 23:38:14.522761 8299 kic.go:194] Starting extracting preloaded images to volume ...
I0923 23:38:14.522836 8299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19696-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-093926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
I0923 23:38:18.298369 8299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19696-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-093926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.775478632s)
I0923 23:38:18.298402 8299 kic.go:203] duration metric: took 3.775637228s to extract preloaded images to volume ...
W0923 23:38:18.298539 8299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0923 23:38:18.298647 8299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0923 23:38:18.348971 8299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-093926 --name addons-093926 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093926 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-093926 --network addons-093926 --ip 192.168.49.2 --volume addons-093926:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
I0923 23:38:18.720605 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Running}}
I0923 23:38:18.744329 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:18.770762 8299 cli_runner.go:164] Run: docker exec addons-093926 stat /var/lib/dpkg/alternatives/iptables
I0923 23:38:18.836776 8299 oci.go:144] the created container "addons-093926" has a running status.
I0923 23:38:18.836805 8299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa...
I0923 23:38:19.282543 8299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0923 23:38:19.304484 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:19.330267 8299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0923 23:38:19.330286 8299 kic_runner.go:114] Args: [docker exec --privileged addons-093926 chown docker:docker /home/docker/.ssh/authorized_keys]
I0923 23:38:19.404785 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:19.425530 8299 machine.go:93] provisionDockerMachine start ...
I0923 23:38:19.425624 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:19.449357 8299 main.go:141] libmachine: Using SSH client type: native
I0923 23:38:19.449936 8299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 23:38:19.449952 8299 main.go:141] libmachine: About to run SSH command:
hostname
I0923 23:38:19.605258 8299 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093926
I0923 23:38:19.605329 8299 ubuntu.go:169] provisioning hostname "addons-093926"
I0923 23:38:19.605443 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:19.623825 8299 main.go:141] libmachine: Using SSH client type: native
I0923 23:38:19.624059 8299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 23:38:19.624072 8299 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-093926 && echo "addons-093926" | sudo tee /etc/hostname
I0923 23:38:19.773094 8299 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093926
I0923 23:38:19.773179 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:19.792753 8299 main.go:141] libmachine: Using SSH client type: native
I0923 23:38:19.792990 8299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 23:38:19.793007 8299 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-093926' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093926/g' /etc/hosts;
else
echo '127.0.1.1 addons-093926' | sudo tee -a /etc/hosts;
fi
fi
I0923 23:38:19.925340 8299 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0923 23:38:19.925438 8299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19696-2224/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-2224/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-2224/.minikube}
I0923 23:38:19.925480 8299 ubuntu.go:177] setting up certificates
I0923 23:38:19.925514 8299 provision.go:84] configureAuth start
I0923 23:38:19.925619 8299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093926
I0923 23:38:19.942153 8299 provision.go:143] copyHostCerts
I0923 23:38:19.942234 8299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-2224/.minikube/ca.pem (1082 bytes)
I0923 23:38:19.942373 8299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-2224/.minikube/cert.pem (1123 bytes)
I0923 23:38:19.942441 8299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-2224/.minikube/key.pem (1675 bytes)
I0923 23:38:19.942502 8299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-2224/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca-key.pem org=jenkins.addons-093926 san=[127.0.0.1 192.168.49.2 addons-093926 localhost minikube]
I0923 23:38:20.119816 8299 provision.go:177] copyRemoteCerts
I0923 23:38:20.119889 8299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0923 23:38:20.119930 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:20.140726 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:20.234318 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0923 23:38:20.258013 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0923 23:38:20.281855 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0923 23:38:20.305350 8299 provision.go:87] duration metric: took 379.804339ms to configureAuth
I0923 23:38:20.305374 8299 ubuntu.go:193] setting minikube options for container-runtime
I0923 23:38:20.305584 8299 config.go:182] Loaded profile config "addons-093926": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:38:20.305645 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:20.322806 8299 main.go:141] libmachine: Using SSH client type: native
I0923 23:38:20.323045 8299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 23:38:20.323062 8299 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0923 23:38:20.453557 8299 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0923 23:38:20.453576 8299 ubuntu.go:71] root file system type: overlay
I0923 23:38:20.453693 8299 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0923 23:38:20.453764 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:20.471279 8299 main.go:141] libmachine: Using SSH client type: native
I0923 23:38:20.471517 8299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 23:38:20.471607 8299 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0923 23:38:20.612428 8299 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0923 23:38:20.612508 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:20.629971 8299 main.go:141] libmachine: Using SSH client type: native
I0923 23:38:20.630214 8299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0923 23:38:20.630238 8299 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0923 23:38:21.386336 8299 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-20 11:39:18.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-23 23:38:20.606603794 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0923 23:38:21.386368 8299 machine.go:96] duration metric: took 1.960819157s to provisionDockerMachine
I0923 23:38:21.386381 8299 client.go:171] duration metric: took 9.245072224s to LocalClient.Create
I0923 23:38:21.386395 8299 start.go:167] duration metric: took 9.245136101s to libmachine.API.Create "addons-093926"
I0923 23:38:21.386402 8299 start.go:293] postStartSetup for "addons-093926" (driver="docker")
I0923 23:38:21.386417 8299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 23:38:21.386489 8299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 23:38:21.386539 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:21.403731 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:21.498566 8299 ssh_runner.go:195] Run: cat /etc/os-release
I0923 23:38:21.501782 8299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0923 23:38:21.501863 8299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0923 23:38:21.501880 8299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0923 23:38:21.501887 8299 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0923 23:38:21.501898 8299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-2224/.minikube/addons for local assets ...
I0923 23:38:21.501972 8299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-2224/.minikube/files for local assets ...
I0923 23:38:21.501998 8299 start.go:296] duration metric: took 115.585661ms for postStartSetup
I0923 23:38:21.502313 8299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093926
I0923 23:38:21.519677 8299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/config.json ...
I0923 23:38:21.520050 8299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0923 23:38:21.520107 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:21.536399 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:21.625874 8299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0923 23:38:21.630242 8299 start.go:128] duration metric: took 9.492649313s to createHost
I0923 23:38:21.630271 8299 start.go:83] releasing machines lock for "addons-093926", held for 9.49279369s
I0923 23:38:21.630339 8299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093926
I0923 23:38:21.647155 8299 ssh_runner.go:195] Run: cat /version.json
I0923 23:38:21.647210 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:21.647222 8299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0923 23:38:21.647279 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:21.671637 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:21.690072 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:21.887047 8299 ssh_runner.go:195] Run: systemctl --version
I0923 23:38:21.891273 8299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0923 23:38:21.895407 8299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0923 23:38:21.921007 8299 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0923 23:38:21.921105 8299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0923 23:38:21.951153 8299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0923 23:38:21.951179 8299 start.go:495] detecting cgroup driver to use...
I0923 23:38:21.951215 8299 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 23:38:21.951319 8299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 23:38:21.967134 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0923 23:38:21.976591 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0923 23:38:21.986095 8299 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0923 23:38:21.986227 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0923 23:38:21.996172 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 23:38:22.006664 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0923 23:38:22.020781 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 23:38:22.030695 8299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0923 23:38:22.040034 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0923 23:38:22.050193 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0923 23:38:22.060021 8299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0923 23:38:22.070405 8299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0923 23:38:22.079542 8299 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0923 23:38:22.079630 8299 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0923 23:38:22.093893 8299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0923 23:38:22.102647 8299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 23:38:22.190854 8299 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0923 23:38:22.286728 8299 start.go:495] detecting cgroup driver to use...
I0923 23:38:22.286785 8299 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 23:38:22.286845 8299 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0923 23:38:22.300696 8299 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0923 23:38:22.300782 8299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0923 23:38:22.313034 8299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 23:38:22.332172 8299 ssh_runner.go:195] Run: which cri-dockerd
I0923 23:38:22.336669 8299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0923 23:38:22.346527 8299 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0923 23:38:22.369531 8299 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0923 23:38:22.466009 8299 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0923 23:38:22.565933 8299 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0923 23:38:22.566081 8299 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0923 23:38:22.585673 8299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 23:38:22.680692 8299 ssh_runner.go:195] Run: sudo systemctl restart docker
I0923 23:38:22.941503 8299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0923 23:38:22.953433 8299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 23:38:22.965053 8299 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0923 23:38:23.060858 8299 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0923 23:38:23.153250 8299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 23:38:23.238509 8299 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0923 23:38:23.253109 8299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 23:38:23.264465 8299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 23:38:23.350583 8299 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0923 23:38:23.424264 8299 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0923 23:38:23.424352 8299 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0923 23:38:23.428432 8299 start.go:563] Will wait 60s for crictl version
I0923 23:38:23.428494 8299 ssh_runner.go:195] Run: which crictl
I0923 23:38:23.432003 8299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0923 23:38:23.471018 8299 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0923 23:38:23.471088 8299 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0923 23:38:23.492114 8299 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0923 23:38:23.516331 8299 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0923 23:38:23.516443 8299 cli_runner.go:164] Run: docker network inspect addons-093926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 23:38:23.532277 8299 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0923 23:38:23.535940 8299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 23:38:23.546591 8299 kubeadm.go:883] updating cluster {Name:addons-093926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0923 23:38:23.546705 8299 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 23:38:23.546762 8299 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0923 23:38:23.564402 8299 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0923 23:38:23.564424 8299 docker.go:615] Images already preloaded, skipping extraction
I0923 23:38:23.564490 8299 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0923 23:38:23.582600 8299 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0923 23:38:23.582622 8299 cache_images.go:84] Images are preloaded, skipping loading
I0923 23:38:23.582639 8299 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0923 23:38:23.582754 8299 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-093926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-093926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0923 23:38:23.582831 8299 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0923 23:38:23.624048 8299 cni.go:84] Creating CNI manager for ""
I0923 23:38:23.624077 8299 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 23:38:23.624090 8299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 23:38:23.624110 8299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093926 NodeName:addons-093926 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0923 23:38:23.624262 8299 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-093926"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0923 23:38:23.624332 8299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0923 23:38:23.632783 8299 binaries.go:44] Found k8s binaries, skipping transfer
I0923 23:38:23.632841 8299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0923 23:38:23.641470 8299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0923 23:38:23.659747 8299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0923 23:38:23.677883 8299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0923 23:38:23.696908 8299 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0923 23:38:23.700302 8299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 23:38:23.711044 8299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 23:38:23.808273 8299 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 23:38:23.823810 8299 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926 for IP: 192.168.49.2
I0923 23:38:23.823884 8299 certs.go:194] generating shared ca certs ...
I0923 23:38:23.823923 8299 certs.go:226] acquiring lock for ca certs: {Name:mk2066353a0f9e2eeb8088ba089b2b1912cf6957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:23.824083 8299 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-2224/.minikube/ca.key
I0923 23:38:24.355663 8299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-2224/.minikube/ca.crt ...
I0923 23:38:24.355694 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/ca.crt: {Name:mkf84bd665a52c4621008bfc528ee73085bf01bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:24.355911 8299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-2224/.minikube/ca.key ...
I0923 23:38:24.355926 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/ca.key: {Name:mk697a6c5b01583a410f1b8d8604a1e1a71a00b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:24.356002 8299 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.key
I0923 23:38:24.740630 8299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.crt ...
I0923 23:38:24.740659 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.crt: {Name:mkd2a2148f125d2b571f257ccaa5174a536b98eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:24.740843 8299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.key ...
I0923 23:38:24.740857 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.key: {Name:mkcc6564b23c74b1cbfc1190fc98b81f6b30afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:24.740939 8299 certs.go:256] generating profile certs ...
I0923 23:38:24.741009 8299 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/client.key
I0923 23:38:24.741034 8299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/client.crt with IP's: []
I0923 23:38:25.228693 8299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/client.crt ...
I0923 23:38:25.228724 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/client.crt: {Name:mk23468514537a5d2944cf677890c9aa791f3009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:25.228916 8299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/client.key ...
I0923 23:38:25.228930 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/client.key: {Name:mk3920ec38dbe6a83f8862a35de1abb1062bdd19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:25.229016 8299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.key.2837299d
I0923 23:38:25.229040 8299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.crt.2837299d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0923 23:38:25.429314 8299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.crt.2837299d ...
I0923 23:38:25.429350 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.crt.2837299d: {Name:mk00a6a8f357f09ee4e64e81c4569d0c9b86f058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:25.429546 8299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.key.2837299d ...
I0923 23:38:25.429561 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.key.2837299d: {Name:mk860c14f8aeb61373d3b80114089c7582e51638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:25.429649 8299 certs.go:381] copying /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.crt.2837299d -> /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.crt
I0923 23:38:25.429731 8299 certs.go:385] copying /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.key.2837299d -> /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.key
I0923 23:38:25.429786 8299 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.key
I0923 23:38:25.429806 8299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.crt with IP's: []
I0923 23:38:27.285973 8299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.crt ...
I0923 23:38:27.286006 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.crt: {Name:mkd91c7a51a6fd158f838a04ad60c279bdcfbd61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:27.286196 8299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.key ...
I0923 23:38:27.286209 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.key: {Name:mk7550fd75fd6440062cbef5c20284b755adce5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:27.286387 8299 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca-key.pem (1679 bytes)
I0923 23:38:27.286428 8299 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/ca.pem (1082 bytes)
I0923 23:38:27.286458 8299 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/cert.pem (1123 bytes)
I0923 23:38:27.286487 8299 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-2224/.minikube/certs/key.pem (1675 bytes)
I0923 23:38:27.287077 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0923 23:38:27.311863 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0923 23:38:27.336255 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0923 23:38:27.360529 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0923 23:38:27.384452 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0923 23:38:27.408806 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0923 23:38:27.432581 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0923 23:38:27.458582 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/profiles/addons-093926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0923 23:38:27.482727 8299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-2224/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0923 23:38:27.508091 8299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0923 23:38:27.528040 8299 ssh_runner.go:195] Run: openssl version
I0923 23:38:27.534316 8299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0923 23:38:27.544305 8299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:27.548049 8299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:27.548118 8299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:27.555215 8299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0923 23:38:27.565226 8299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0923 23:38:27.569104 8299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0923 23:38:27.569155 8299 kubeadm.go:392] StartCluster: {Name:addons-093926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 23:38:27.569280 8299 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0923 23:38:27.587108 8299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0923 23:38:27.595842 8299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0923 23:38:27.604604 8299 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0923 23:38:27.604671 8299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0923 23:38:27.613229 8299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0923 23:38:27.613249 8299 kubeadm.go:157] found existing configuration files:
I0923 23:38:27.613312 8299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0923 23:38:27.622046 8299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0923 23:38:27.622115 8299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0923 23:38:27.630499 8299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0923 23:38:27.638968 8299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0923 23:38:27.639034 8299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0923 23:38:27.647246 8299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0923 23:38:27.655913 8299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0923 23:38:27.656009 8299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0923 23:38:27.664327 8299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0923 23:38:27.672731 8299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0923 23:38:27.672796 8299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0923 23:38:27.680872 8299 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0923 23:38:27.725093 8299 kubeadm.go:310] W0923 23:38:27.724390 1831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 23:38:27.725844 8299 kubeadm.go:310] W0923 23:38:27.725334 1831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 23:38:27.748623 8299 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0923 23:38:27.807680 8299 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0923 23:38:43.632678 8299 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0923 23:38:43.632735 8299 kubeadm.go:310] [preflight] Running pre-flight checks
I0923 23:38:43.632821 8299 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0923 23:38:43.632876 8299 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0923 23:38:43.632910 8299 kubeadm.go:310] OS: Linux
I0923 23:38:43.632954 8299 kubeadm.go:310] CGROUPS_CPU: enabled
I0923 23:38:43.633011 8299 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0923 23:38:43.633058 8299 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0923 23:38:43.633106 8299 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0923 23:38:43.633152 8299 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0923 23:38:43.633200 8299 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0923 23:38:43.633245 8299 kubeadm.go:310] CGROUPS_PIDS: enabled
I0923 23:38:43.633291 8299 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0923 23:38:43.633336 8299 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0923 23:38:43.633427 8299 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0923 23:38:43.633521 8299 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0923 23:38:43.633608 8299 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0923 23:38:43.633669 8299 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0923 23:38:43.635976 8299 out.go:235] - Generating certificates and keys ...
I0923 23:38:43.636118 8299 kubeadm.go:310] [certs] Using existing ca certificate authority
I0923 23:38:43.636228 8299 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0923 23:38:43.636384 8299 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0923 23:38:43.636481 8299 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0923 23:38:43.636586 8299 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0923 23:38:43.636665 8299 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0923 23:38:43.636721 8299 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0923 23:38:43.636870 8299 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 23:38:43.636944 8299 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0923 23:38:43.637094 8299 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 23:38:43.637173 8299 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0923 23:38:43.637243 8299 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0923 23:38:43.637301 8299 kubeadm.go:310] [certs] Generating "sa" key and public key
I0923 23:38:43.637366 8299 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0923 23:38:43.637470 8299 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0923 23:38:43.637542 8299 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0923 23:38:43.637602 8299 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0923 23:38:43.637673 8299 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0923 23:38:43.637747 8299 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0923 23:38:43.637878 8299 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0923 23:38:43.637966 8299 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0923 23:38:43.639851 8299 out.go:235] - Booting up control plane ...
I0923 23:38:43.639993 8299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0923 23:38:43.640074 8299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0923 23:38:43.640153 8299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0923 23:38:43.640271 8299 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0923 23:38:43.640380 8299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0923 23:38:43.640440 8299 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0923 23:38:43.640612 8299 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0923 23:38:43.640740 8299 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0923 23:38:43.640821 8299 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501766074s
I0923 23:38:43.640918 8299 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0923 23:38:43.640983 8299 kubeadm.go:310] [api-check] The API server is healthy after 6.502361891s
I0923 23:38:43.641122 8299 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0923 23:38:43.641272 8299 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0923 23:38:43.641355 8299 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0923 23:38:43.641607 8299 kubeadm.go:310] [mark-control-plane] Marking the node addons-093926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0923 23:38:43.641680 8299 kubeadm.go:310] [bootstrap-token] Using token: 50sfvn.27q2kkfryku9u2mt
I0923 23:38:43.643615 8299 out.go:235] - Configuring RBAC rules ...
I0923 23:38:43.643726 8299 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0923 23:38:43.643814 8299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0923 23:38:43.643959 8299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0923 23:38:43.644098 8299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0923 23:38:43.644216 8299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0923 23:38:43.644305 8299 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0923 23:38:43.644421 8299 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0923 23:38:43.644468 8299 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0923 23:38:43.644517 8299 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0923 23:38:43.644524 8299 kubeadm.go:310]
I0923 23:38:43.644585 8299 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0923 23:38:43.644593 8299 kubeadm.go:310]
I0923 23:38:43.644671 8299 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0923 23:38:43.644679 8299 kubeadm.go:310]
I0923 23:38:43.644705 8299 kubeadm.go:310] mkdir -p $HOME/.kube
I0923 23:38:43.644767 8299 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0923 23:38:43.644822 8299 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0923 23:38:43.644830 8299 kubeadm.go:310]
I0923 23:38:43.644884 8299 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0923 23:38:43.644892 8299 kubeadm.go:310]
I0923 23:38:43.644939 8299 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0923 23:38:43.644947 8299 kubeadm.go:310]
I0923 23:38:43.645005 8299 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0923 23:38:43.645083 8299 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0923 23:38:43.645154 8299 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0923 23:38:43.645165 8299 kubeadm.go:310]
I0923 23:38:43.645248 8299 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0923 23:38:43.645327 8299 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0923 23:38:43.645335 8299 kubeadm.go:310]
I0923 23:38:43.645464 8299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 50sfvn.27q2kkfryku9u2mt \
I0923 23:38:43.645595 8299 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a2c909068b01ea2dee7e7e079daa000bbca85377c6980fa457b5fc9e7e4f0edb \
I0923 23:38:43.645635 8299 kubeadm.go:310] --control-plane
I0923 23:38:43.645650 8299 kubeadm.go:310]
I0923 23:38:43.645778 8299 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0923 23:38:43.645797 8299 kubeadm.go:310]
I0923 23:38:43.645886 8299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 50sfvn.27q2kkfryku9u2mt \
I0923 23:38:43.646002 8299 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a2c909068b01ea2dee7e7e079daa000bbca85377c6980fa457b5fc9e7e4f0edb
I0923 23:38:43.646022 8299 cni.go:84] Creating CNI manager for ""
I0923 23:38:43.646048 8299 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 23:38:43.649035 8299 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0923 23:38:43.650916 8299 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0923 23:38:43.660084 8299 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0923 23:38:43.682041 8299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0923 23:38:43.682171 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:43.682248 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093926 minikube.k8s.io/updated_at=2024_09_23T23_38_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-093926 minikube.k8s.io/primary=true
I0923 23:38:43.821656 8299 ops.go:34] apiserver oom_adj: -16
I0923 23:38:43.821762 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:44.322288 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:44.821932 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:45.322777 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:45.822628 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:46.322563 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:46.822105 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:47.322133 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:47.822120 8299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:38:47.926681 8299 kubeadm.go:1113] duration metric: took 4.244559032s to wait for elevateKubeSystemPrivileges
I0923 23:38:47.926721 8299 kubeadm.go:394] duration metric: took 20.357558813s to StartCluster
I0923 23:38:47.926740 8299 settings.go:142] acquiring lock: {Name:mkef6b1e260366ae38a11088eefb1025db21f714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:47.926865 8299 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19696-2224/kubeconfig
I0923 23:38:47.927240 8299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-2224/kubeconfig: {Name:mk75221f7e4d9f0581d8ac2f8d2e5ae1150624d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:47.927424 8299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0923 23:38:47.927445 8299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 23:38:47.927655 8299 config.go:182] Loaded profile config "addons-093926": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:38:47.927685 8299 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0923 23:38:47.927765 8299 addons.go:69] Setting yakd=true in profile "addons-093926"
I0923 23:38:47.927778 8299 addons.go:234] Setting addon yakd=true in "addons-093926"
I0923 23:38:47.927802 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.928237 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.928747 8299 addons.go:69] Setting cloud-spanner=true in profile "addons-093926"
I0923 23:38:47.928772 8299 addons.go:234] Setting addon cloud-spanner=true in "addons-093926"
I0923 23:38:47.928795 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.928817 8299 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093926"
I0923 23:38:47.928879 8299 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093926"
I0923 23:38:47.928936 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.929219 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.929476 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.930648 8299 addons.go:69] Setting default-storageclass=true in profile "addons-093926"
I0923 23:38:47.930671 8299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093926"
I0923 23:38:47.930937 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.928795 8299 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093926"
I0923 23:38:47.937733 8299 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093926"
I0923 23:38:47.937797 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.938295 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.940899 8299 addons.go:69] Setting registry=true in profile "addons-093926"
I0923 23:38:47.940971 8299 addons.go:234] Setting addon registry=true in "addons-093926"
I0923 23:38:47.941050 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.941671 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.946217 8299 addons.go:69] Setting gcp-auth=true in profile "addons-093926"
I0923 23:38:47.946250 8299 mustload.go:65] Loading cluster: addons-093926
I0923 23:38:47.946425 8299 config.go:182] Loaded profile config "addons-093926": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:38:47.946717 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.956039 8299 addons.go:69] Setting storage-provisioner=true in profile "addons-093926"
I0923 23:38:47.956069 8299 addons.go:234] Setting addon storage-provisioner=true in "addons-093926"
I0923 23:38:47.956112 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.956578 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.966596 8299 addons.go:69] Setting ingress=true in profile "addons-093926"
I0923 23:38:47.966690 8299 addons.go:234] Setting addon ingress=true in "addons-093926"
I0923 23:38:47.966766 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:47.972514 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:47.980113 8299 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093926"
I0923 23:38:47.980205 8299 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093926"
I0923 23:38:47.983721 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.008152 8299 addons.go:69] Setting volcano=true in profile "addons-093926"
I0923 23:38:48.008248 8299 addons.go:234] Setting addon volcano=true in "addons-093926"
I0923 23:38:48.008329 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.009032 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.011432 8299 addons.go:69] Setting ingress-dns=true in profile "addons-093926"
I0923 23:38:48.014159 8299 addons.go:234] Setting addon ingress-dns=true in "addons-093926"
I0923 23:38:48.014325 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.022508 8299 addons.go:69] Setting inspektor-gadget=true in profile "addons-093926"
I0923 23:38:48.022594 8299 addons.go:234] Setting addon inspektor-gadget=true in "addons-093926"
I0923 23:38:48.022662 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.023160 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.030080 8299 addons.go:69] Setting volumesnapshots=true in profile "addons-093926"
I0923 23:38:48.030186 8299 addons.go:234] Setting addon volumesnapshots=true in "addons-093926"
I0923 23:38:48.030262 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.030942 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.042886 8299 addons.go:69] Setting metrics-server=true in profile "addons-093926"
I0923 23:38:48.042968 8299 addons.go:234] Setting addon metrics-server=true in "addons-093926"
I0923 23:38:48.043005 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.043631 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.049789 8299 out.go:177] * Verifying Kubernetes components...
I0923 23:38:48.052381 8299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 23:38:48.059223 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.070880 8299 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0923 23:38:48.071332 8299 out.go:177] - Using image docker.io/registry:2.8.3
I0923 23:38:48.073999 8299 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0923 23:38:48.074250 8299 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0923 23:38:48.074267 8299 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0923 23:38:48.074461 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.105726 8299 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0923 23:38:48.108190 8299 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0923 23:38:48.110946 8299 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0923 23:38:48.110971 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0923 23:38:48.111050 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.111328 8299 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0923 23:38:48.111353 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0923 23:38:48.111425 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.135226 8299 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0923 23:38:48.137448 8299 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0923 23:38:48.139479 8299 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0923 23:38:48.142661 8299 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0923 23:38:48.145557 8299 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0923 23:38:48.152814 8299 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093926"
I0923 23:38:48.152853 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.153291 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.177520 8299 addons.go:234] Setting addon default-storageclass=true in "addons-093926"
I0923 23:38:48.177561 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.177973 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:48.181202 8299 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0923 23:38:48.186900 8299 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0923 23:38:48.191694 8299 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 23:38:48.191717 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0923 23:38:48.191893 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.206194 8299 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0923 23:38:48.217194 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:48.218694 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 23:38:48.218715 8299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0923 23:38:48.218764 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.252497 8299 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 23:38:48.281539 8299 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 23:38:48.282120 8299 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0923 23:38:48.285705 8299 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0923 23:38:48.285952 8299 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:38:48.285964 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0923 23:38:48.286024 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.298680 8299 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0923 23:38:48.302062 8299 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0923 23:38:48.302120 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0923 23:38:48.302223 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.317994 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.323059 8299 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0923 23:38:48.323083 8299 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0923 23:38:48.337916 8299 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0923 23:38:48.340005 8299 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0923 23:38:48.340816 8299 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0923 23:38:48.342051 8299 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 23:38:48.342070 8299 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0923 23:38:48.342144 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.342341 8299 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0923 23:38:48.342358 8299 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0923 23:38:48.342422 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.369133 8299 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 23:38:48.369160 8299 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0923 23:38:48.369224 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.397679 8299 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 23:38:48.397714 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0923 23:38:48.397785 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.415994 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.416934 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.418099 8299 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0923 23:38:48.418274 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.420021 8299 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0923 23:38:48.420035 8299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0923 23:38:48.420093 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.423629 8299 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0923 23:38:48.423649 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0923 23:38:48.423707 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.432683 8299 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0923 23:38:48.435271 8299 out.go:177] - Using image docker.io/busybox:stable
I0923 23:38:48.439113 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.441639 8299 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 23:38:48.441657 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0923 23:38:48.441718 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:48.502999 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.503629 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.517539 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.533363 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.541712 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.541859 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.565230 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.569547 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
W0923 23:38:48.572018 8299 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0923 23:38:48.572051 8299 retry.go:31] will retry after 374.020776ms: ssh: handshake failed: EOF
I0923 23:38:48.580735 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:48.974205 8299 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0923 23:38:48.974282 8299 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0923 23:38:49.106668 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 23:38:49.106689 8299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0923 23:38:49.165533 8299 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0923 23:38:49.165553 8299 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0923 23:38:49.290520 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:38:49.400352 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 23:38:49.430835 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0923 23:38:49.432643 8299 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 23:38:49.432694 8299 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0923 23:38:49.445661 8299 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0923 23:38:49.445737 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0923 23:38:49.518616 8299 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 23:38:49.518690 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0923 23:38:49.527776 8299 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0923 23:38:49.527839 8299 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0923 23:38:49.529964 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 23:38:49.530026 8299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0923 23:38:49.616201 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0923 23:38:49.638796 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0923 23:38:49.671613 8299 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 23:38:49.671636 8299 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0923 23:38:49.715629 8299 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 23:38:49.715652 8299 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0923 23:38:49.722766 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0923 23:38:49.734041 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 23:38:49.767564 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 23:38:49.901170 8299 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.84871086s)
I0923 23:38:49.901243 8299 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 23:38:49.901294 8299 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.97385331s)
I0923 23:38:49.901443 8299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0923 23:38:49.918159 8299 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 23:38:49.918186 8299 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0923 23:38:49.922170 8299 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0923 23:38:49.922194 8299 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0923 23:38:49.931619 8299 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0923 23:38:49.931644 8299 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0923 23:38:49.974008 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 23:38:49.974051 8299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0923 23:38:50.051386 8299 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0923 23:38:50.051425 8299 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0923 23:38:50.123754 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0923 23:38:50.171735 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0923 23:38:50.216093 8299 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 23:38:50.216139 8299 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0923 23:38:50.257349 8299 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0923 23:38:50.257373 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0923 23:38:50.325015 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 23:38:50.325042 8299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0923 23:38:50.403154 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0923 23:38:50.471279 8299 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 23:38:50.471353 8299 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0923 23:38:50.585030 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 23:38:50.585100 8299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0923 23:38:50.588865 8299 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 23:38:50.588926 8299 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0923 23:38:50.774152 8299 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 23:38:50.774238 8299 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0923 23:38:50.797484 8299 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 23:38:50.797547 8299 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0923 23:38:50.828440 8299 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 23:38:50.828472 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0923 23:38:50.845501 8299 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0923 23:38:50.845537 8299 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0923 23:38:50.868120 8299 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:38:50.868145 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0923 23:38:50.888192 8299 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0923 23:38:50.888226 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0923 23:38:51.078966 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0923 23:38:51.096974 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:38:51.139224 8299 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 23:38:51.139253 8299 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0923 23:38:51.700627 8299 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 23:38:51.700649 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0923 23:38:52.298884 8299 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 23:38:52.298907 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0923 23:38:53.177329 8299 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 23:38:53.177356 8299 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0923 23:38:53.687527 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.396919092s)
I0923 23:38:53.850295 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 23:38:54.516386 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.115957216s)
I0923 23:38:55.227435 8299 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0923 23:38:55.227541 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:55.262849 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:56.127796 8299 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0923 23:38:56.196143 8299 addons.go:234] Setting addon gcp-auth=true in "addons-093926"
I0923 23:38:56.196236 8299 host.go:66] Checking if "addons-093926" exists ...
I0923 23:38:56.196722 8299 cli_runner.go:164] Run: docker container inspect addons-093926 --format={{.State.Status}}
I0923 23:38:56.229472 8299 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0923 23:38:56.229542 8299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093926
I0923 23:38:56.254849 8299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-2224/.minikube/machines/addons-093926/id_rsa Username:docker}
I0923 23:38:58.494502 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.063588938s)
I0923 23:38:58.494537 8299 addons.go:475] Verifying addon ingress=true in "addons-093926"
I0923 23:38:58.494701 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.878467432s)
I0923 23:38:58.496458 8299 out.go:177] * Verifying ingress addon...
I0923 23:38:58.499024 8299 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0923 23:38:58.505770 8299 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0923 23:38:58.505807 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:38:59.010457 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:38:59.544609 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:00.028139 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:00.507547 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:00.714029 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.075183856s)
I0923 23:39:00.714086 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.991258988s)
I0923 23:39:00.714097 8299 addons.go:475] Verifying addon registry=true in "addons-093926"
I0923 23:39:00.714342 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.980235549s)
I0923 23:39:00.714433 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.946811182s)
I0923 23:39:00.714680 8299 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.81321823s)
I0923 23:39:00.714727 8299 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0923 23:39:00.715860 8299 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.814600801s)
I0923 23:39:00.716623 8299 out.go:177] * Verifying registry addon...
I0923 23:39:00.716815 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.593020766s)
I0923 23:39:00.716835 8299 addons.go:475] Verifying addon metrics-server=true in "addons-093926"
I0923 23:39:00.716863 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.545103195s)
I0923 23:39:00.717000 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.313819802s)
I0923 23:39:00.717195 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.638192175s)
I0923 23:39:00.717299 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.620294554s)
W0923 23:39:00.717323 8299 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 23:39:00.717343 8299 retry.go:31] will retry after 207.27618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 23:39:00.717637 8299 node_ready.go:35] waiting up to 6m0s for node "addons-093926" to be "Ready" ...
I0923 23:39:00.719484 8299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0923 23:39:00.719646 8299 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-093926 service yakd-dashboard -n yakd-dashboard
I0923 23:39:00.748610 8299 node_ready.go:49] node "addons-093926" has status "Ready":"True"
I0923 23:39:00.748633 8299 node_ready.go:38] duration metric: took 30.939497ms for node "addons-093926" to be "Ready" ...
I0923 23:39:00.748644 8299 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 23:39:00.761143 8299 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0923 23:39:00.761218 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W0923 23:39:00.835227 8299 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0923 23:39:00.868942 8299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mrwrc" in "kube-system" namespace to be "Ready" ...
I0923 23:39:00.925808 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:39:00.994508 8299 pod_ready.go:93] pod "coredns-7c65d6cfc9-mrwrc" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:00.994580 8299 pod_ready.go:82] duration metric: took 125.565319ms for pod "coredns-7c65d6cfc9-mrwrc" in "kube-system" namespace to be "Ready" ...
I0923 23:39:00.994620 8299 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rnm46" in "kube-system" namespace to be "Ready" ...
I0923 23:39:01.056694 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:01.219157 8299 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093926" context rescaled to 1 replicas
I0923 23:39:01.227742 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:01.518803 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:01.578378 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.72798621s)
I0923 23:39:01.578428 8299 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093926"
I0923 23:39:01.578609 8299 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.349107082s)
I0923 23:39:01.581143 8299 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 23:39:01.581212 8299 out.go:177] * Verifying csi-hostpath-driver addon...
I0923 23:39:01.584931 8299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 23:39:01.587097 8299 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0923 23:39:01.588692 8299 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 23:39:01.588722 8299 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0923 23:39:01.618224 8299 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 23:39:01.618253 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:01.723917 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:01.735319 8299 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 23:39:01.735393 8299 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0923 23:39:01.813161 8299 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 23:39:01.813234 8299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0923 23:39:01.867949 8299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 23:39:02.022286 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:02.113631 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:02.226070 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:02.505755 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:02.591156 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:02.724113 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:03.003013 8299 pod_ready.go:103] pod "coredns-7c65d6cfc9-rnm46" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:03.005824 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:03.089815 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:03.224293 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:03.331639 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.405726607s)
I0923 23:39:03.451947 8299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.583950653s)
I0923 23:39:03.455918 8299 addons.go:475] Verifying addon gcp-auth=true in "addons-093926"
I0923 23:39:03.458336 8299 out.go:177] * Verifying gcp-auth addon...
I0923 23:39:03.461098 8299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0923 23:39:03.463997 8299 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 23:39:03.503140 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:03.590495 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:03.724818 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:04.002864 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:04.092251 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:04.224068 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:04.567235 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:04.590163 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:04.724057 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:05.008574 8299 pod_ready.go:103] pod "coredns-7c65d6cfc9-rnm46" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:05.067125 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:05.090283 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:05.225113 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:05.503857 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:05.589029 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:05.724235 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:06.005905 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:06.091493 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:06.227734 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:06.503720 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:06.504274 8299 pod_ready.go:98] pod "coredns-7c65d6cfc9-rnm46" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:06 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:38:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:38:49 +0000 UTC,FinishedAt:2024-09-23 23:39:06 +0000 UTC,ContainerID:docker://2cdbfd8d2dcfc08666133fda135226be90c1ccf84a0abccc08a021727adfb651,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://2cdbfd8d2dcfc08666133fda135226be90c1ccf84a0abccc08a021727adfb651 Started:0x4000131fc0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4000063150} {Name:kube-api-access-2p58r MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4000063170}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0923 23:39:06.504300 8299 pod_ready.go:82] duration metric: took 5.50965424s for pod "coredns-7c65d6cfc9-rnm46" in "kube-system" namespace to be "Ready" ...
E0923 23:39:06.504314 8299 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-rnm46" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:06 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:38:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:38:49 +0000 UTC,FinishedAt:2024-09-23 23:39:06 +0000 UTC,ContainerID:docker://2cdbfd8d2dcfc08666133fda135226be90c1ccf84a0abccc08a021727adfb651,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://2cdbfd8d2dcfc08666133fda135226be90c1ccf84a0abccc08a021727adfb651 Started:0x4000131fc0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4000063150} {Name:kube-api-access-2p58r MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4000063170}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0923 23:39:06.504327 8299 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.509860 8299 pod_ready.go:93] pod "etcd-addons-093926" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:06.509890 8299 pod_ready.go:82] duration metric: took 5.543666ms for pod "etcd-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.509902 8299 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.515389 8299 pod_ready.go:93] pod "kube-apiserver-addons-093926" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:06.515416 8299 pod_ready.go:82] duration metric: took 5.506095ms for pod "kube-apiserver-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.515429 8299 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.521120 8299 pod_ready.go:93] pod "kube-controller-manager-addons-093926" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:06.521144 8299 pod_ready.go:82] duration metric: took 5.705914ms for pod "kube-controller-manager-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.521157 8299 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c5bjm" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.526512 8299 pod_ready.go:93] pod "kube-proxy-c5bjm" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:06.526539 8299 pod_ready.go:82] duration metric: took 5.373254ms for pod "kube-proxy-c5bjm" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.526551 8299 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.590109 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:06.723151 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:06.900079 8299 pod_ready.go:93] pod "kube-scheduler-addons-093926" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:06.900105 8299 pod_ready.go:82] duration metric: took 373.545484ms for pod "kube-scheduler-addons-093926" in "kube-system" namespace to be "Ready" ...
I0923 23:39:06.900118 8299 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace to be "Ready" ...
I0923 23:39:07.005704 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:07.091830 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:07.223696 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:07.503441 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:07.590844 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:07.723742 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:08.004172 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:08.090428 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:08.223270 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:08.503297 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:08.589706 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:08.722861 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:08.906654 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:09.003650 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:09.091378 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:09.223929 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:09.503975 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:09.590683 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:09.723998 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:10.004574 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:10.091337 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:10.223690 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:10.571222 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:10.590053 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:10.723885 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:10.906934 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:11.006271 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:11.090579 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:11.224480 8299 kapi.go:107] duration metric: took 10.504986354s to wait for kubernetes.io/minikube-addons=registry ...
I0923 23:39:11.503995 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:11.590210 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:12.010635 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:12.091195 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:12.503563 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:12.590111 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:13.008544 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:13.089594 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:13.406476 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:13.503760 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:13.590766 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:14.004573 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:14.094065 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:14.503848 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:14.592895 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:15.023780 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:15.095285 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:15.503704 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:15.591054 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:15.906522 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:16.004106 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:16.091345 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:16.504478 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:16.591213 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:17.006890 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:17.091228 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:17.536697 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:17.590624 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:17.908058 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:18.004340 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:18.093998 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:18.566807 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:18.590799 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:19.004423 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:19.091907 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:19.504043 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:19.589982 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:20.004798 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:20.091876 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:20.407089 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:20.504503 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:20.590356 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:21.003634 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:21.090118 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:21.503664 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:21.589931 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:22.004806 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:22.090022 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:22.504012 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:22.589272 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:22.905748 8299 pod_ready.go:103] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:23.004695 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:23.090872 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:23.503112 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:23.589971 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:23.906574 8299 pod_ready.go:93] pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:23.906606 8299 pod_ready.go:82] duration metric: took 17.006480089s for pod "metrics-server-84c5f94fbc-njwv5" in "kube-system" namespace to be "Ready" ...
I0923 23:39:23.906618 8299 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fs8hv" in "kube-system" namespace to be "Ready" ...
I0923 23:39:23.912345 8299 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fs8hv" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:23.912378 8299 pod_ready.go:82] duration metric: took 5.752314ms for pod "nvidia-device-plugin-daemonset-fs8hv" in "kube-system" namespace to be "Ready" ...
I0923 23:39:23.912402 8299 pod_ready.go:39] duration metric: took 23.163745789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 23:39:23.912422 8299 api_server.go:52] waiting for apiserver process to appear ...
I0923 23:39:23.912492 8299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:23.927778 8299 api_server.go:72] duration metric: took 36.000294882s to wait for apiserver process to appear ...
I0923 23:39:23.927806 8299 api_server.go:88] waiting for apiserver healthz status ...
I0923 23:39:23.927827 8299 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0923 23:39:23.936810 8299 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0923 23:39:23.938420 8299 api_server.go:141] control plane version: v1.31.1
I0923 23:39:23.938484 8299 api_server.go:131] duration metric: took 10.670816ms to wait for apiserver health ...
I0923 23:39:23.938507 8299 system_pods.go:43] waiting for kube-system pods to appear ...
I0923 23:39:23.948674 8299 system_pods.go:59] 17 kube-system pods found
I0923 23:39:23.948771 8299 system_pods.go:61] "coredns-7c65d6cfc9-mrwrc" [9d0d82e0-7dd3-4b44-85e5-89c562726648] Running
I0923 23:39:23.948798 8299 system_pods.go:61] "csi-hostpath-attacher-0" [a16f2837-d82a-4735-93e0-7a7791d1c09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 23:39:23.948840 8299 system_pods.go:61] "csi-hostpath-resizer-0" [25a24f21-3bbb-4bfb-b7c8-6f7235bdf616] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 23:39:23.948872 8299 system_pods.go:61] "csi-hostpathplugin-zhbmd" [c6392ac4-77d2-4cdf-b282-3fdac6623cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 23:39:23.948894 8299 system_pods.go:61] "etcd-addons-093926" [4f272f2a-41cf-4f2c-9660-95308888caf1] Running
I0923 23:39:23.948918 8299 system_pods.go:61] "kube-apiserver-addons-093926" [4b1a72e7-e4df-404a-8d94-28cde8e3e96d] Running
I0923 23:39:23.948950 8299 system_pods.go:61] "kube-controller-manager-addons-093926" [6dbd9d60-2b9e-4128-90f9-ae9940b39185] Running
I0923 23:39:23.948976 8299 system_pods.go:61] "kube-ingress-dns-minikube" [2603ac93-e4e2-4dcc-ba09-d44bcec7d27a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0923 23:39:23.948995 8299 system_pods.go:61] "kube-proxy-c5bjm" [a96f794a-dbde-41f9-9da3-fcc534029dc8] Running
I0923 23:39:23.949017 8299 system_pods.go:61] "kube-scheduler-addons-093926" [198fc8a3-e25f-4626-9600-7026c3588ccd] Running
I0923 23:39:23.949048 8299 system_pods.go:61] "metrics-server-84c5f94fbc-njwv5" [d5b63a80-40cb-4533-94a7-36f27dc1e030] Running
I0923 23:39:23.949080 8299 system_pods.go:61] "nvidia-device-plugin-daemonset-fs8hv" [36e9cbc4-45e9-4224-979a-63554be57c22] Running
I0923 23:39:23.949100 8299 system_pods.go:61] "registry-66c9cd494c-5nsr7" [489c11e2-9ffb-44d0-ab77-26a06d440d24] Running
I0923 23:39:23.949117 8299 system_pods.go:61] "registry-proxy-6hk9r" [7c97bbfc-93ec-48a1-aeb1-7e1f322373db] Running
I0923 23:39:23.949149 8299 system_pods.go:61] "snapshot-controller-56fcc65765-4rmlh" [22d88433-72a1-4052-8855-7c83ab3236fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:23.949174 8299 system_pods.go:61] "snapshot-controller-56fcc65765-r2265" [1d9522ae-13ab-4cd8-830e-c6d2c9291499] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:23.949195 8299 system_pods.go:61] "storage-provisioner" [6fa5136a-8fb2-44d3-bc72-c234901d477f] Running
I0923 23:39:23.949217 8299 system_pods.go:74] duration metric: took 10.692273ms to wait for pod list to return data ...
I0923 23:39:23.949247 8299 default_sa.go:34] waiting for default service account to be created ...
I0923 23:39:23.957561 8299 default_sa.go:45] found service account: "default"
I0923 23:39:23.957584 8299 default_sa.go:55] duration metric: took 8.314978ms for default service account to be created ...
I0923 23:39:23.957594 8299 system_pods.go:116] waiting for k8s-apps to be running ...
I0923 23:39:23.967961 8299 system_pods.go:86] 17 kube-system pods found
I0923 23:39:23.968050 8299 system_pods.go:89] "coredns-7c65d6cfc9-mrwrc" [9d0d82e0-7dd3-4b44-85e5-89c562726648] Running
I0923 23:39:23.968076 8299 system_pods.go:89] "csi-hostpath-attacher-0" [a16f2837-d82a-4735-93e0-7a7791d1c09f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 23:39:23.968116 8299 system_pods.go:89] "csi-hostpath-resizer-0" [25a24f21-3bbb-4bfb-b7c8-6f7235bdf616] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 23:39:23.968142 8299 system_pods.go:89] "csi-hostpathplugin-zhbmd" [c6392ac4-77d2-4cdf-b282-3fdac6623cba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 23:39:23.968162 8299 system_pods.go:89] "etcd-addons-093926" [4f272f2a-41cf-4f2c-9660-95308888caf1] Running
I0923 23:39:23.968183 8299 system_pods.go:89] "kube-apiserver-addons-093926" [4b1a72e7-e4df-404a-8d94-28cde8e3e96d] Running
I0923 23:39:23.968216 8299 system_pods.go:89] "kube-controller-manager-addons-093926" [6dbd9d60-2b9e-4128-90f9-ae9940b39185] Running
I0923 23:39:23.968243 8299 system_pods.go:89] "kube-ingress-dns-minikube" [2603ac93-e4e2-4dcc-ba09-d44bcec7d27a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0923 23:39:23.968263 8299 system_pods.go:89] "kube-proxy-c5bjm" [a96f794a-dbde-41f9-9da3-fcc534029dc8] Running
I0923 23:39:23.968283 8299 system_pods.go:89] "kube-scheduler-addons-093926" [198fc8a3-e25f-4626-9600-7026c3588ccd] Running
I0923 23:39:23.968316 8299 system_pods.go:89] "metrics-server-84c5f94fbc-njwv5" [d5b63a80-40cb-4533-94a7-36f27dc1e030] Running
I0923 23:39:23.968338 8299 system_pods.go:89] "nvidia-device-plugin-daemonset-fs8hv" [36e9cbc4-45e9-4224-979a-63554be57c22] Running
I0923 23:39:23.968357 8299 system_pods.go:89] "registry-66c9cd494c-5nsr7" [489c11e2-9ffb-44d0-ab77-26a06d440d24] Running
I0923 23:39:23.968377 8299 system_pods.go:89] "registry-proxy-6hk9r" [7c97bbfc-93ec-48a1-aeb1-7e1f322373db] Running
I0923 23:39:23.968401 8299 system_pods.go:89] "snapshot-controller-56fcc65765-4rmlh" [22d88433-72a1-4052-8855-7c83ab3236fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:23.968437 8299 system_pods.go:89] "snapshot-controller-56fcc65765-r2265" [1d9522ae-13ab-4cd8-830e-c6d2c9291499] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:23.968457 8299 system_pods.go:89] "storage-provisioner" [6fa5136a-8fb2-44d3-bc72-c234901d477f] Running
I0923 23:39:23.968478 8299 system_pods.go:126] duration metric: took 10.878004ms to wait for k8s-apps to be running ...
I0923 23:39:23.968509 8299 system_svc.go:44] waiting for kubelet service to be running ....
I0923 23:39:23.968583 8299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0923 23:39:23.986068 8299 system_svc.go:56] duration metric: took 17.560984ms WaitForService to wait for kubelet
I0923 23:39:23.986147 8299 kubeadm.go:582] duration metric: took 36.058678135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 23:39:23.986181 8299 node_conditions.go:102] verifying NodePressure condition ...
I0923 23:39:23.989829 8299 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0923 23:39:23.989909 8299 node_conditions.go:123] node cpu capacity is 2
I0923 23:39:23.989936 8299 node_conditions.go:105] duration metric: took 3.721351ms to run NodePressure ...
I0923 23:39:23.989959 8299 start.go:241] waiting for startup goroutines ...
I0923 23:39:24.005245 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:24.090110 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:24.503379 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:24.589889 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:25.070339 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:25.091232 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:25.505137 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:25.594791 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:26.003596 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:26.090212 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:26.503594 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:26.589666 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:27.004725 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:27.090808 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:27.503921 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:27.589629 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:28.015361 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:28.095766 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:28.503718 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:28.590327 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:29.006562 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:29.090058 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:29.504663 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:29.590914 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:30.003725 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:30.097405 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:30.503494 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:30.590362 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:31.004338 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:31.090093 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:31.502915 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:31.591154 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:32.007831 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:32.090437 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:32.503362 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:32.594169 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:33.016326 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:33.092068 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:33.567000 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:33.589866 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:34.004777 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:34.090238 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:34.566114 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:34.589991 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:35.006236 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:35.089869 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:35.569622 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:35.590678 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:36.008588 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:36.090463 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:36.503289 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:36.589693 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:37.003749 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:37.090638 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:37.503405 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:37.596824 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:38.006877 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:38.089553 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:38.503650 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:38.590495 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:39.004392 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:39.090025 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:39.503852 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:39.589448 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:40.004123 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:40.090373 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:40.502963 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:40.589591 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:41.003771 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:41.090498 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:41.503764 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:41.592396 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:42.005491 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:42.091091 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:42.503554 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:42.590069 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:43.005051 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:43.094075 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:43.503214 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:43.591170 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:44.003681 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:44.090452 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:44.503349 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:44.590196 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:45.069522 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:45.112296 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:45.570449 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:45.590952 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:46.007314 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:46.090020 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:46.504127 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:46.600104 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:47.066703 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:47.090259 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:47.503902 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:47.589827 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:48.004835 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:48.089990 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:48.567354 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:48.590730 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:49.003291 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:49.090726 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:49.566504 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:49.590394 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:50.018775 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:50.090724 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:50.503934 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:50.589245 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:51.003716 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:51.090933 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:51.567314 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:51.589657 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:52.014194 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:52.090416 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:52.504281 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:52.591784 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:53.067280 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:53.089479 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:53.566185 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:53.589601 8299 kapi.go:107] duration metric: took 52.004670344s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0923 23:39:54.004238 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:54.505815 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:55.006922 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:55.509700 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:56.006114 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:56.503505 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:57.003266 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:57.503007 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:58.003220 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:58.503170 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:59.003512 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:39:59.503633 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:00.008665 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:00.515163 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:01.006970 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:01.503635 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:02.003413 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:02.504749 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:03.003071 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:03.504407 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:04.003596 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:04.503589 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:05.007284 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:05.503986 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:06.014873 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:06.568340 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:07.004065 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:07.504909 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:08.006530 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:08.504073 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:09.009520 8299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 23:40:09.503607 8299 kapi.go:107] duration metric: took 1m11.004580463s to wait for app.kubernetes.io/name=ingress-nginx ...
I0923 23:40:26.964487 8299 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 23:40:26.964514 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:27.465045 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:27.964575 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:28.465450 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:28.964466 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:29.464647 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:29.964569 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:30.468651 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:30.964690 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:31.464392 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:31.964692 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:32.464165 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:32.965673 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:33.464900 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:33.965000 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:34.465685 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:34.965527 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:35.464789 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:35.965527 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:36.464783 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:36.964449 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:37.464849 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:37.965255 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:38.465236 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:38.964745 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:39.465524 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:39.966302 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:40.467203 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:40.964456 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:41.465243 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:41.965229 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:42.464758 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:42.964077 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:43.464664 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:43.964450 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:44.464940 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:44.964218 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:45.465099 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:45.964463 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:46.464690 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:46.965950 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:47.464226 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:47.964864 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:48.464743 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:48.964527 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:49.465293 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:49.965126 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:50.464787 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:50.964001 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:51.464581 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:51.964982 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:52.464234 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:52.965012 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:53.464539 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:53.965640 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:54.465276 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:54.964555 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:55.464618 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:55.965505 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:56.464778 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:56.966057 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:57.464329 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:57.965314 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:58.465132 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:58.964711 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:59.465062 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:59.965512 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:00.464720 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:00.964422 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:01.464300 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:01.964921 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:02.464631 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:02.965194 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:03.464904 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:03.964729 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:04.464665 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:04.964196 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:05.465164 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:05.967826 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:06.475039 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:06.965047 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:07.465308 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:07.965119 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:08.465462 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:08.965052 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:09.464786 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:09.965036 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:10.464930 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:10.965497 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:11.464832 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:11.964615 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:12.464126 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:12.964458 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:13.464831 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:13.964842 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:14.464807 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:14.964418 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:15.464694 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:15.964718 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:16.464421 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:16.964330 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:17.465458 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:17.965321 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:18.465039 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:18.965298 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:19.465316 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:19.964873 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:20.465040 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:20.964240 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:21.464896 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:21.964470 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:22.465021 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:22.964373 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:23.464432 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:23.965261 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:24.465030 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:24.964289 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:25.465137 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:25.965101 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:26.464667 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:26.964345 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:27.466026 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:27.964702 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:28.464548 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:28.964612 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:29.464883 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:29.964361 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:30.465079 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:30.964558 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:31.464910 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:31.968040 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:32.464689 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:32.965444 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:33.464627 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:33.967516 8299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:41:34.465065 8299 kapi.go:107] duration metric: took 2m31.003966385s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0923 23:41:34.467143 8299 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093926 cluster.
I0923 23:41:34.469442 8299 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0923 23:41:34.471474 8299 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0923 23:41:34.473475 8299 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, volcano, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0923 23:41:34.475497 8299 addons.go:510] duration metric: took 2m46.547803067s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner volcano nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0923 23:41:34.475552 8299 start.go:246] waiting for cluster config update ...
I0923 23:41:34.475590 8299 start.go:255] writing updated cluster config ...
I0923 23:41:34.475905 8299 ssh_runner.go:195] Run: rm -f paused
I0923 23:41:34.838444 8299 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0923 23:41:34.841010 8299 out.go:177] * Done! kubectl is now configured to use "addons-093926" cluster and "default" namespace by default
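As a sketch of the `gcp-auth-skip-secret` note in the minikube output above (pod name, container name, and image are placeholders, not taken from this log), a pod that opts out of the addon's credential mounting might be labeled like this:

```yaml
# Hypothetical pod spec illustrating the gcp-auth-skip-secret label.
# With this label present, the gcp-auth webhook skips mounting GCP
# credentials into the pod; all names below are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

Per the same output, pods created before the addon was enabled only pick up credentials after being recreated or after rerunning `addons enable` with `--refresh`.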
==> Docker <==
Sep 23 23:51:23 addons-093926 dockerd[1285]: time="2024-09-23T23:51:23.520322121Z" level=info msg="ignoring event" container=5483d233c7e07b0cc50147b0eff368cd6037eb201358eb16470339525addcbc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.457377837Z" level=info msg="ignoring event" container=9fcb100acb0bd425471173e35aa18b28db9757f1eaa35c1c6ca86f12565f2873 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.485634905Z" level=info msg="ignoring event" container=f69a5da6438f83beda37c4720fbee0bf328bd8576a0c525e58d9e3c4d9c08bc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.513221768Z" level=info msg="ignoring event" container=0460873943d452d08097a95c1dc10e84a3d6cb642529db52d45a8fdc31131cbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.522534678Z" level=info msg="ignoring event" container=0b2ec1998fea5dc584011a6d754d90eb11bf9945717c56fddf26c33e6933afd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.530115766Z" level=info msg="ignoring event" container=250772aade6280a1d113187bd89e603eb3f31d023487695b44911d0ef163dbf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.532476389Z" level=info msg="ignoring event" container=f3c905961c94e082b9de45d7c869bf967020b18d3edd7db994c0b39fdb213739 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.574604898Z" level=info msg="ignoring event" container=cb52d9156b90f799a10bf53ae15e524ca4526d948d799bb6019439229209deb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.574688376Z" level=info msg="ignoring event" container=068c0354ea68f72fc1826f63383b8c5a374af8d45eeb8d368c5e675171b17d68 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.759224985Z" level=info msg="ignoring event" container=d80e12ab2e6b91290359d8870a22ab64233eb4dbe468bf4dde55b4c4dfce7243 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.802377114Z" level=info msg="ignoring event" container=118ac4dc42ee84f239219e7480e91229f04ec5c312d7f22f1bb5754383c44755 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:25 addons-093926 dockerd[1285]: time="2024-09-23T23:51:25.849269190Z" level=info msg="ignoring event" container=0b6073f25a0d7eaa296ae25210f9b1f8c533eb3596d63b14bae0602a18bf682b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:31 addons-093926 dockerd[1285]: time="2024-09-23T23:51:31.976671017Z" level=info msg="ignoring event" container=bcd2f3db7013aac94a6977718079fb17e2018c2158dc9a78e572b978b1b0aba3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:31 addons-093926 dockerd[1285]: time="2024-09-23T23:51:31.981857062Z" level=info msg="ignoring event" container=227d3101adb7875325cf76556d61d3eabe5e4d8707162c14568f5a93be6d82d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:32 addons-093926 dockerd[1285]: time="2024-09-23T23:51:32.141778520Z" level=info msg="ignoring event" container=d12f143d4bb71fc68bdaad42d71812b26264ef877efbaa25290536b94bbc9677 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:32 addons-093926 dockerd[1285]: time="2024-09-23T23:51:32.176118374Z" level=info msg="ignoring event" container=54d07a1bcf1c138de6ad2cdb11632e7a846e51438bed555fcc3b6f4617ce6898 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:33 addons-093926 dockerd[1285]: time="2024-09-23T23:51:33.096267054Z" level=info msg="ignoring event" container=2356037ab137250d5a13ff6eb3866239b882bf9ddfdc4b534c88e8c201f27353 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:33 addons-093926 cri-dockerd[1546]: time="2024-09-23T23:51:33Z" level=error msg="error getting RW layer size for container ID '227d3101adb7875325cf76556d61d3eabe5e4d8707162c14568f5a93be6d82d4': Error response from daemon: No such container: 227d3101adb7875325cf76556d61d3eabe5e4d8707162c14568f5a93be6d82d4"
Sep 23 23:51:33 addons-093926 cri-dockerd[1546]: time="2024-09-23T23:51:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '227d3101adb7875325cf76556d61d3eabe5e4d8707162c14568f5a93be6d82d4'"
Sep 23 23:51:33 addons-093926 cri-dockerd[1546]: time="2024-09-23T23:51:33Z" level=error msg="error getting RW layer size for container ID 'bcd2f3db7013aac94a6977718079fb17e2018c2158dc9a78e572b978b1b0aba3': Error response from daemon: No such container: bcd2f3db7013aac94a6977718079fb17e2018c2158dc9a78e572b978b1b0aba3"
Sep 23 23:51:33 addons-093926 cri-dockerd[1546]: time="2024-09-23T23:51:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bcd2f3db7013aac94a6977718079fb17e2018c2158dc9a78e572b978b1b0aba3'"
Sep 23 23:51:33 addons-093926 dockerd[1285]: time="2024-09-23T23:51:33.791258028Z" level=info msg="ignoring event" container=86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:33 addons-093926 dockerd[1285]: time="2024-09-23T23:51:33.913564215Z" level=info msg="ignoring event" container=bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:34 addons-093926 dockerd[1285]: time="2024-09-23T23:51:34.071220183Z" level=info msg="ignoring event" container=27bd436e887f60ec72f397b3e7bd0ec1db92a0be0e6b5401ccf9f9aeb975df27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:51:34 addons-093926 dockerd[1285]: time="2024-09-23T23:51:34.287010522Z" level=info msg="ignoring event" container=9d6ebbd097426170df7d4f686f3746b4900e049f9115a283ed6d96a67c4f0f7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
1f08f7dd50ca9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 46 seconds ago Exited gadget 7 fbe5254c4aa79 gadget-9rq4t
c77804ddef1b2 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 4b12e04a2fd62 gcp-auth-89d5ffd79-h5xns
a2e04b87876a4 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 2ce58b6831f8c ingress-nginx-controller-bc57996ff-vnxmw
95c2bed06ea2d registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 12 minutes ago Exited patch 0 1babed3f912e6 ingress-nginx-admission-patch-f52px
35f4840850e04 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 12 minutes ago Exited create 0 c4e10a521dd1b ingress-nginx-admission-create-lvlmn
791a9e01a9d31 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 003baf263de8c local-path-provisioner-86d989889c-5pvqf
39aa243fe41e7 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 4788faab9d076 kube-ingress-dns-minikube
da8d426deee8c registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 a02ade8427f06 metrics-server-84c5f94fbc-njwv5
10d45e1ebcd67 gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 12 minutes ago Running cloud-spanner-emulator 0 241eb80712311 cloud-spanner-emulator-5b584cc74-8q482
57a13973166b0 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 1bbc87fb47396 nvidia-device-plugin-daemonset-fs8hv
81e1fca23006f ba04bb24b9575 12 minutes ago Running storage-provisioner 0 ef8de714d833f storage-provisioner
0f9d341c4b625 2f6c962e7b831 12 minutes ago Running coredns 0 7b9dbfe44243e coredns-7c65d6cfc9-mrwrc
3e2942e76f6ce 24a140c548c07 12 minutes ago Running kube-proxy 0 7e6aed9e41d92 kube-proxy-c5bjm
4122576e89bec d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 ee419a87c593b kube-apiserver-addons-093926
ed5bfb79c1cc5 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 df666ec2964dc kube-scheduler-addons-093926
6ad3a7a703ffb 27e3830e14027 12 minutes ago Running etcd 0 2303252f3cdc6 etcd-addons-093926
80c779bd8ff1f 279f381cb3736 12 minutes ago Running kube-controller-manager 0 7696aad85af41 kube-controller-manager-addons-093926
==> controller_ingress [a2e04b87876a] <==
NGINX Ingress controller
Release: v1.11.2
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
I0923 23:40:08.786612 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
I0923 23:40:09.056062 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0923 23:40:09.110459 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0923 23:40:09.132447 7 nginx.go:271] "Starting NGINX Ingress controller"
I0923 23:40:09.152652 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e7caab67-6d07-4ef6-b08e-1ab783986d01", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0923 23:40:09.154643 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"f50acb55-b129-4f0e-b7a1-1158d6e5f6b8", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0923 23:40:09.159490 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"970cf073-e156-4f8d-bd9a-1c1235defff5", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0923 23:40:10.334569 7 nginx.go:317] "Starting NGINX process"
I0923 23:40:10.335180 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0923 23:40:10.335585 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0923 23:40:10.336435 7 controller.go:193] "Configuration changes detected, backend reload required"
I0923 23:40:10.354997 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0923 23:40:10.355460 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-vnxmw"
I0923 23:40:10.366124 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-vnxmw" node="addons-093926"
I0923 23:40:10.387749 7 controller.go:213] "Backend successfully reloaded"
I0923 23:40:10.388055 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0923 23:40:10.388704 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vnxmw", UID:"cce0fd5c-9d0c-4246-870c-736c2a8c2417", APIVersion:"v1", ResourceVersion:"1268", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [0f9d341c4b62] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/arm64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
[INFO] Reloading complete
[INFO] 127.0.0.1:57795 - 54786 "HINFO IN 3261959489524646881.8927662167753423729. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041619315s
[INFO] 10.244.0.25:45512 - 26646 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000298084s
[INFO] 10.244.0.25:47755 - 41589 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000221818s
[INFO] 10.244.0.25:46493 - 58544 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157661s
[INFO] 10.244.0.25:34297 - 16809 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108587s
[INFO] 10.244.0.25:42547 - 49887 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000220217s
[INFO] 10.244.0.25:54990 - 26324 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000255918s
[INFO] 10.244.0.25:45555 - 51003 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003974742s
[INFO] 10.244.0.25:51578 - 40906 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004566273s
[INFO] 10.244.0.25:45712 - 12425 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001897262s
[INFO] 10.244.0.25:40531 - 11280 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001477792s
==> describe nodes <==
Name: addons-093926
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-093926
kubernetes.io/os=linux
minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
minikube.k8s.io/name=addons-093926
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_23T23_38_43_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-093926
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Sep 2024 23:38:40 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-093926
AcquireTime: <unset>
RenewTime: Mon, 23 Sep 2024 23:51:27 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 23 Sep 2024 23:47:23 +0000 Mon, 23 Sep 2024 23:38:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Sep 2024 23:47:23 +0000 Mon, 23 Sep 2024 23:38:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 23 Sep 2024 23:47:23 +0000 Mon, 23 Sep 2024 23:38:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 23 Sep 2024 23:47:23 +0000 Mon, 23 Sep 2024 23:38:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-093926
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 173746167c6244aca0631c7d55d57510
System UUID: c49f385c-6992-4dad-8900-472891f80485
Boot ID: 8d2d086a-088f-49b2-840d-19a6bad91fe6
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m18s
default cloud-spanner-emulator-5b584cc74-8q482 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-9rq4t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-h5xns 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-vnxmw 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-mrwrc 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system etcd-addons-093926 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-093926 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-093926 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-c5bjm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-093926 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-njwv5 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system nvidia-device-plugin-daemonset-fs8hv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-5pvqf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (5%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal NodeHasSufficientMemory 13m (x8 over 13m) kubelet Node addons-093926 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m (x7 over 13m) kubelet Node addons-093926 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m (x7 over 13m) kubelet Node addons-093926 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13m kubelet Updated Node Allocatable limit across pods
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-093926 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-093926 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-093926 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-093926 event: Registered Node addons-093926 in Controller
==> dmesg <==
[Sep23 23:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014949] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.411235] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.827369] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +5.765298] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [6ad3a7a703ff] <==
{"level":"info","ts":"2024-09-23T23:38:36.510483Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-23T23:38:36.517078Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-23T23:38:37.477443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-23T23:38:37.477671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-23T23:38:37.477769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-23T23:38:37.477893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-23T23:38:37.477976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T23:38:37.478101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-23T23:38:37.478180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T23:38:37.481530Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:37.487686Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-093926 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-23T23:38:37.488038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T23:38:37.489405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T23:38:37.490187Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T23:38:37.502345Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-23T23:38:37.502660Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:37.502846Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:37.502988Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:37.493181Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-23T23:38:37.506128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-23T23:38:37.493793Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T23:38:37.517158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-23T23:48:37.662138Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1881}
{"level":"info","ts":"2024-09-23T23:48:37.724846Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1881,"took":"62.125789ms","hash":874625979,"current-db-size-bytes":8847360,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4923392,"current-db-size-in-use":"4.9 MB"}
{"level":"info","ts":"2024-09-23T23:48:37.724906Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":874625979,"revision":1881,"compact-revision":-1}
==> gcp-auth [c77804ddef1b] <==
2024/09/23 23:41:33 GCP Auth Webhook started!
2024/09/23 23:41:52 Ready to marshal response ...
2024/09/23 23:41:52 Ready to write response ...
2024/09/23 23:41:53 Ready to marshal response ...
2024/09/23 23:41:53 Ready to write response ...
2024/09/23 23:42:17 Ready to marshal response ...
2024/09/23 23:42:17 Ready to write response ...
2024/09/23 23:42:17 Ready to marshal response ...
2024/09/23 23:42:17 Ready to write response ...
2024/09/23 23:42:17 Ready to marshal response ...
2024/09/23 23:42:17 Ready to write response ...
2024/09/23 23:50:32 Ready to marshal response ...
2024/09/23 23:50:32 Ready to write response ...
2024/09/23 23:50:46 Ready to marshal response ...
2024/09/23 23:50:46 Ready to write response ...
2024/09/23 23:51:16 Ready to marshal response ...
2024/09/23 23:51:16 Ready to write response ...
==> kernel <==
23:51:35 up 34 min, 0 users, load average: 0.68, 0.47, 0.43
Linux addons-093926 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [4122576e89be] <==
I0923 23:42:08.130246 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 23:42:08.183896 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 23:42:08.327587 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0923 23:42:08.406095 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0923 23:42:08.849181 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0923 23:42:08.883996 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0923 23:42:08.959366 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0923 23:42:09.009695 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0923 23:42:09.343210 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0923 23:42:09.520772 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0923 23:50:54.975279 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E0923 23:50:56.635252 1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
I0923 23:51:31.673532 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 23:51:31.673576 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 23:51:31.703538 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 23:51:31.703585 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 23:51:31.728370 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 23:51:31.728575 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 23:51:31.829028 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 23:51:31.829311 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0923 23:51:31.846509 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0923 23:51:31.846553 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0923 23:51:32.830449 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0923 23:51:32.847058 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0923 23:51:32.850339 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [80c779bd8ff1] <==
E0923 23:51:05.562474 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:51:18.609059 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:18.609101 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:51:23.632140 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:23.632179 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 23:51:25.263630 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
I0923 23:51:25.341269 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
I0923 23:51:25.622355 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093926"
W0923 23:51:26.069612 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:26.069656 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:51:26.906550 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:26.906595 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:51:27.051388 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:27.051430 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 23:51:31.875493 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="8.911µs"
E0923 23:51:32.832263 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0923 23:51:32.848879 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0923 23:51:32.851860 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 23:51:33.679883 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.753µs"
W0923 23:51:33.717769 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:33.717805 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:51:33.901636 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:33.901693 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:51:34.421960 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:51:34.422033 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [3e2942e76f6c] <==
I0923 23:38:49.136663 1 server_linux.go:66] "Using iptables proxy"
I0923 23:38:49.261995 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0923 23:38:49.262065 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0923 23:38:49.327160 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0923 23:38:49.327235 1 server_linux.go:169] "Using iptables Proxier"
I0923 23:38:49.329329 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0923 23:38:49.329661 1 server.go:483] "Version info" version="v1.31.1"
I0923 23:38:49.329674 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0923 23:38:49.339277 1 config.go:199] "Starting service config controller"
I0923 23:38:49.339312 1 shared_informer.go:313] Waiting for caches to sync for service config
I0923 23:38:49.339346 1 config.go:105] "Starting endpoint slice config controller"
I0923 23:38:49.339351 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0923 23:38:49.339363 1 config.go:328] "Starting node config controller"
I0923 23:38:49.339376 1 shared_informer.go:313] Waiting for caches to sync for node config
I0923 23:38:49.440190 1 shared_informer.go:320] Caches are synced for node config
I0923 23:38:49.440243 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0923 23:38:49.440278 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [ed5bfb79c1cc] <==
W0923 23:38:40.307484 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0923 23:38:40.308133 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 23:38:40.307696 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0923 23:38:40.308335 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 23:38:40.307766 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0923 23:38:40.308529 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:40.306100 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 23:38:40.308738 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 23:38:40.309485 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0923 23:38:40.309524 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.206345 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0923 23:38:41.206447 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.291709 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 23:38:41.291757 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.333335 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0923 23:38:41.333414 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.340335 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0923 23:38:41.340447 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.379765 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0923 23:38:41.379909 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.391947 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0923 23:38:41.392060 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:41.544515 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0923 23:38:41.544762 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0923 23:38:44.686334 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 23 23:51:32 addons-093926 kubelet[2345]: I0923 23:51:32.948659 2345 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9522ae-13ab-4cd8-830e-c6d2c9291499" path="/var/lib/kubelet/pods/1d9522ae-13ab-4cd8-830e-c6d2c9291499/volumes"
Sep 23 23:51:32 addons-093926 kubelet[2345]: I0923 23:51:32.949074 2345 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d88433-72a1-4052-8855-7c83ab3236fe" path="/var/lib/kubelet/pods/22d88433-72a1-4052-8855-7c83ab3236fe/volumes"
Sep 23 23:51:33 addons-093926 kubelet[2345]: I0923 23:51:33.263838 2345 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e8b23a3-c6ac-4555-8b27-67631a9f9bef-gcp-creds\") pod \"9e8b23a3-c6ac-4555-8b27-67631a9f9bef\" (UID: \"9e8b23a3-c6ac-4555-8b27-67631a9f9bef\") "
Sep 23 23:51:33 addons-093926 kubelet[2345]: I0923 23:51:33.263897 2345 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df927\" (UniqueName: \"kubernetes.io/projected/9e8b23a3-c6ac-4555-8b27-67631a9f9bef-kube-api-access-df927\") pod \"9e8b23a3-c6ac-4555-8b27-67631a9f9bef\" (UID: \"9e8b23a3-c6ac-4555-8b27-67631a9f9bef\") "
Sep 23 23:51:33 addons-093926 kubelet[2345]: I0923 23:51:33.264313 2345 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e8b23a3-c6ac-4555-8b27-67631a9f9bef-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9e8b23a3-c6ac-4555-8b27-67631a9f9bef" (UID: "9e8b23a3-c6ac-4555-8b27-67631a9f9bef"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 23 23:51:33 addons-093926 kubelet[2345]: I0923 23:51:33.265942 2345 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8b23a3-c6ac-4555-8b27-67631a9f9bef-kube-api-access-df927" (OuterVolumeSpecName: "kube-api-access-df927") pod "9e8b23a3-c6ac-4555-8b27-67631a9f9bef" (UID: "9e8b23a3-c6ac-4555-8b27-67631a9f9bef"). InnerVolumeSpecName "kube-api-access-df927". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 23:51:33 addons-093926 kubelet[2345]: I0923 23:51:33.364956 2345 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e8b23a3-c6ac-4555-8b27-67631a9f9bef-gcp-creds\") on node \"addons-093926\" DevicePath \"\""
Sep 23 23:51:33 addons-093926 kubelet[2345]: I0923 23:51:33.364993 2345 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-df927\" (UniqueName: \"kubernetes.io/projected/9e8b23a3-c6ac-4555-8b27-67631a9f9bef-kube-api-access-df927\") on node \"addons-093926\" DevicePath \"\""
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.287760 2345 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86mtv\" (UniqueName: \"kubernetes.io/projected/489c11e2-9ffb-44d0-ab77-26a06d440d24-kube-api-access-86mtv\") pod \"489c11e2-9ffb-44d0-ab77-26a06d440d24\" (UID: \"489c11e2-9ffb-44d0-ab77-26a06d440d24\") "
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.294059 2345 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489c11e2-9ffb-44d0-ab77-26a06d440d24-kube-api-access-86mtv" (OuterVolumeSpecName: "kube-api-access-86mtv") pod "489c11e2-9ffb-44d0-ab77-26a06d440d24" (UID: "489c11e2-9ffb-44d0-ab77-26a06d440d24"). InnerVolumeSpecName "kube-api-access-86mtv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.391521 2345 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-86mtv\" (UniqueName: \"kubernetes.io/projected/489c11e2-9ffb-44d0-ab77-26a06d440d24-kube-api-access-86mtv\") on node \"addons-093926\" DevicePath \"\""
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.492032 2345 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tz4n\" (UniqueName: \"kubernetes.io/projected/7c97bbfc-93ec-48a1-aeb1-7e1f322373db-kube-api-access-7tz4n\") pod \"7c97bbfc-93ec-48a1-aeb1-7e1f322373db\" (UID: \"7c97bbfc-93ec-48a1-aeb1-7e1f322373db\") "
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.494000 2345 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c97bbfc-93ec-48a1-aeb1-7e1f322373db-kube-api-access-7tz4n" (OuterVolumeSpecName: "kube-api-access-7tz4n") pod "7c97bbfc-93ec-48a1-aeb1-7e1f322373db" (UID: "7c97bbfc-93ec-48a1-aeb1-7e1f322373db"). InnerVolumeSpecName "kube-api-access-7tz4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.540634 2345 scope.go:117] "RemoveContainer" containerID="86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.592801 2345 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7tz4n\" (UniqueName: \"kubernetes.io/projected/7c97bbfc-93ec-48a1-aeb1-7e1f322373db-kube-api-access-7tz4n\") on node \"addons-093926\" DevicePath \"\""
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.601808 2345 scope.go:117] "RemoveContainer" containerID="86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa"
Sep 23 23:51:34 addons-093926 kubelet[2345]: E0923 23:51:34.610975 2345 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa" containerID="86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.611222 2345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa"} err="failed to get container status \"86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa\": rpc error: code = Unknown desc = Error response from daemon: No such container: 86b9658df5ce9099b912854f6d7a2f5979e43d2fd07dcd8504b349185c6f75fa"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.611331 2345 scope.go:117] "RemoveContainer" containerID="bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.650665 2345 scope.go:117] "RemoveContainer" containerID="bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556"
Sep 23 23:51:34 addons-093926 kubelet[2345]: E0923 23:51:34.652197 2345 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556" containerID="bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.652475 2345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556"} err="failed to get container status \"bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556\": rpc error: code = Unknown desc = Error response from daemon: No such container: bb99ea008b1ad70458472287e779ce061fe09b4cc1019f345b33005e68621556"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.948252 2345 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489c11e2-9ffb-44d0-ab77-26a06d440d24" path="/var/lib/kubelet/pods/489c11e2-9ffb-44d0-ab77-26a06d440d24/volumes"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.948615 2345 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c97bbfc-93ec-48a1-aeb1-7e1f322373db" path="/var/lib/kubelet/pods/7c97bbfc-93ec-48a1-aeb1-7e1f322373db/volumes"
Sep 23 23:51:34 addons-093926 kubelet[2345]: I0923 23:51:34.948949 2345 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8b23a3-c6ac-4555-8b27-67631a9f9bef" path="/var/lib/kubelet/pods/9e8b23a3-c6ac-4555-8b27-67631a9f9bef/volumes"
==> storage-provisioner [81e1fca23006] <==
I0923 23:38:54.850630 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0923 23:38:54.880711 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0923 23:38:54.880760 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0923 23:38:54.914145 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0923 23:38:54.914346 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093926_5ad6aa97-4bb2-4c5a-96cb-b01b70568c63!
I0923 23:38:54.922069 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e1ce8aa-4f99-49dd-b00e-6ae9ca5e3077", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093926_5ad6aa97-4bb2-4c5a-96cb-b01b70568c63 became leader
I0923 23:38:55.015475 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093926_5ad6aa97-4bb2-4c5a-96cb-b01b70568c63!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-093926 -n addons-093926
helpers_test.go:261: (dbg) Run: kubectl --context addons-093926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-lvlmn ingress-nginx-admission-patch-f52px
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-093926 describe pod busybox ingress-nginx-admission-create-lvlmn ingress-nginx-admission-patch-f52px
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093926 describe pod busybox ingress-nginx-admission-create-lvlmn ingress-nginx-admission-patch-f52px: exit status 1 (91.82717ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-093926/192.168.49.2
Start Time: Mon, 23 Sep 2024 23:42:17 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hrnnj (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-hrnnj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m19s default-scheduler Successfully assigned default/busybox to addons-093926
Normal Pulling 7m49s (x4 over 9m19s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m48s (x4 over 9m18s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m48s (x4 over 9m18s) kubelet Error: ErrImagePull
Warning Failed 7m38s (x6 over 9m18s) kubelet Error: ImagePullBackOff
Normal BackOff 4m12s (x21 over 9m18s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-lvlmn" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-f52px" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-093926 describe pod busybox ingress-nginx-admission-create-lvlmn ingress-nginx-admission-patch-f52px: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.72s)