=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 9.950688ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tgghm" [ec93b34f-db00-4bde-8ed0-46a67564f5cc] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003613597s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tf8z6" [7b435d50-4b55-4c70-b6d9-b0e1fd522370] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00389476s
addons_test.go:338: (dbg) Run: kubectl --context addons-816293 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-816293 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-816293 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.158040303s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-816293 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-arm64 -p addons-816293 ip
2024/09/23 13:23:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-arm64 -p addons-816293 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-816293
helpers_test.go:235: (dbg) docker inspect addons-816293:
-- stdout --
[
    {
        "Id": "2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc",
        "Created": "2024-09-23T13:10:18.540481566Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 721430,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2024-09-23T13:10:18.678365482Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
        "ResolvConfPath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/hostname",
        "HostsPath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/hosts",
        "LogPath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc-json.log",
        "Name": "/addons-816293",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "unconfined",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro",
                "addons-816293:/var"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "addons-816293",
            "PortBindings": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "32443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "ConsoleSize": [
                0,
                0
            ],
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "apparmor=unconfined",
                "label=disable"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 4194304000,
            "NanoCpus": 2000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": [],
            "BlkioDeviceWriteBps": [],
            "BlkioDeviceReadIOps": [],
            "BlkioDeviceWriteIOps": [],
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "MemoryReservation": 0,
            "MemorySwap": 8388608000,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": null,
            "ReadonlyPaths": null
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a-init/diff:/var/lib/docker/overlay2/fce1ff641bd7a248af78be64b9f17f07383efee2fce882f3a641b971f5d14d46/diff",
                "MergedDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a/merged",
                "UpperDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a/diff",
                "WorkDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "addons-816293",
                "Source": "/var/lib/docker/volumes/addons-816293/_data",
                "Destination": "/var",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "addons-816293",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "22/tcp": {},
                "2376/tcp": {},
                "32443/tcp": {},
                "5000/tcp": {},
                "8443/tcp": {}
            },
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "container=docker",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": {
                "created_by.minikube.sigs.k8s.io": "true",
                "mode.minikube.sigs.k8s.io": "addons-816293",
                "name.minikube.sigs.k8s.io": "addons-816293",
                "role.minikube.sigs.k8s.io": ""
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "13ba91a7a3b5a4284e99f463bc9345eac2887bfbbacb3d524406d1c75694d419",
            "SandboxKey": "/var/run/docker/netns/13ba91a7a3b5",
            "Ports": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "33528"
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "33529"
                    }
                ],
                "32443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "33532"
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "33530"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "33531"
                    }
                ]
            },
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "addons-816293": {
                    "IPAMConfig": {
                        "IPv4Address": "192.168.49.2"
                    },
                    "Links": null,
                    "Aliases": null,
                    "MacAddress": "02:42:c0:a8:31:02",
                    "DriverOpts": null,
                    "NetworkID": "52fe0ca6caebd72d934c58931acd354ed76e05ac5c97464262d266155e0634b4",
                    "EndpointID": "76474860bb3303c67977778a0a934a7357992a6a12034b30f5fd27b16c81e85d",
                    "Gateway": "192.168.49.1",
                    "IPAddress": "192.168.49.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DNSNames": [
                        "addons-816293",
                        "2cb365819d99"
                    ]
                }
            }
        }
    }
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-816293 -n addons-816293
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-816293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 logs -n 25: (1.202549734s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-223839 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | |
| | -p download-only-223839 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| delete | -p download-only-223839 | download-only-223839 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| start | -o=json --download-only | download-only-136397 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | |
| | -p download-only-136397 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| delete | -p download-only-136397 | download-only-136397 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| delete | -p download-only-223839 | download-only-223839 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| delete | -p download-only-136397 | download-only-136397 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| start | --download-only -p | download-docker-126922 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | |
| | download-docker-126922 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-126922 | download-docker-126922 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| start | --download-only -p | binary-mirror-953246 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | |
| | binary-mirror-953246 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:39347 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-953246 | binary-mirror-953246 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
| addons | disable dashboard -p | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | |
| | addons-816293 | | | | | |
| addons | enable dashboard -p | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | |
| | addons-816293 | | | | | |
| start | -p addons-816293 --wait=true | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:13 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-816293 addons disable | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:14 UTC | 23 Sep 24 13:14 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | enable headlamp | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
| | -p addons-816293 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-816293 addons disable | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-816293 addons disable | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
| | -p addons-816293 | | | | | |
| ssh | addons-816293 ssh cat | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
| | /opt/local-path-provisioner/pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4_default_test-pvc/file1 | | | | | |
| addons | addons-816293 addons disable | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-816293 ip | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
| addons | addons-816293 addons disable | addons-816293 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/23 13:09:53
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 13:09:53.885694 720939 out.go:345] Setting OutFile to fd 1 ...
I0923 13:09:53.885836 720939 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:09:53.885847 720939 out.go:358] Setting ErrFile to fd 2...
I0923 13:09:53.885853 720939 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:09:53.886116 720939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
I0923 13:09:53.886582 720939 out.go:352] Setting JSON to false
I0923 13:09:53.887428 720939 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10342,"bootTime":1727086652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0923 13:09:53.887509 720939 start.go:139] virtualization:
I0923 13:09:53.889712 720939 out.go:177] * [addons-816293] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0923 13:09:53.891804 720939 out.go:177] - MINIKUBE_LOCATION=19690
I0923 13:09:53.891976 720939 notify.go:220] Checking for updates...
I0923 13:09:53.895295 720939 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 13:09:53.897006 720939 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
I0923 13:09:53.898864 720939 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
I0923 13:09:53.900533 720939 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0923 13:09:53.902274 720939 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0923 13:09:53.904259 720939 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 13:09:53.933085 720939 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I0923 13:09:53.933224 720939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 13:09:53.990778 720939 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:09:53.981675184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 13:09:53.990899 720939 docker.go:318] overlay module found
I0923 13:09:53.992830 720939 out.go:177] * Using the docker driver based on user configuration
I0923 13:09:53.994315 720939 start.go:297] selected driver: docker
I0923 13:09:53.994333 720939 start.go:901] validating driver "docker" against <nil>
I0923 13:09:53.994348 720939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 13:09:53.995010 720939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 13:09:54.052750 720939 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:09:54.043031055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 13:09:54.053010 720939 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0923 13:09:54.053282 720939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 13:09:54.055174 720939 out.go:177] * Using Docker driver with root privileges
I0923 13:09:54.056860 720939 cni.go:84] Creating CNI manager for ""
I0923 13:09:54.057074 720939 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 13:09:54.057092 720939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0923 13:09:54.057205 720939 start.go:340] cluster config:
{Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 13:09:54.059348 720939 out.go:177] * Starting "addons-816293" primary control-plane node in "addons-816293" cluster
I0923 13:09:54.061046 720939 cache.go:121] Beginning downloading kic base image for docker with docker
I0923 13:09:54.062855 720939 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
I0923 13:09:54.064557 720939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 13:09:54.064637 720939 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 13:09:54.064648 720939 cache.go:56] Caching tarball of preloaded images
I0923 13:09:54.064562 720939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
I0923 13:09:54.064751 720939 preload.go:172] Found /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 13:09:54.064762 720939 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 13:09:54.065261 720939 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/config.json ...
I0923 13:09:54.065300 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/config.json: {Name:mkb1a0f55dddf93747091075b7c9989144106a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:09:54.080717 720939 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
I0923 13:09:54.080849 720939 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
I0923 13:09:54.080875 720939 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
I0923 13:09:54.080884 720939 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
I0923 13:09:54.080913 720939 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
I0923 13:09:54.080925 720939 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
I0923 13:10:11.612869 720939 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
I0923 13:10:11.612907 720939 cache.go:194] Successfully downloaded all kic artifacts
I0923 13:10:11.612957 720939 start.go:360] acquireMachinesLock for addons-816293: {Name:mkdca502684789b9579f34074a545d39dc0069d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 13:10:11.613679 720939 start.go:364] duration metric: took 692.309µs to acquireMachinesLock for "addons-816293"
I0923 13:10:11.613720 720939 start.go:93] Provisioning new machine with config: &{Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 13:10:11.613800 720939 start.go:125] createHost starting for "" (driver="docker")
I0923 13:10:11.617016 720939 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0923 13:10:11.617273 720939 start.go:159] libmachine.API.Create for "addons-816293" (driver="docker")
I0923 13:10:11.617310 720939 client.go:168] LocalClient.Create starting
I0923 13:10:11.617438 720939 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem
I0923 13:10:12.072567 720939 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem
I0923 13:10:12.560564 720939 cli_runner.go:164] Run: docker network inspect addons-816293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 13:10:12.575644 720939 cli_runner.go:211] docker network inspect addons-816293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 13:10:12.575733 720939 network_create.go:284] running [docker network inspect addons-816293] to gather additional debugging logs...
I0923 13:10:12.575757 720939 cli_runner.go:164] Run: docker network inspect addons-816293
W0923 13:10:12.590943 720939 cli_runner.go:211] docker network inspect addons-816293 returned with exit code 1
I0923 13:10:12.590975 720939 network_create.go:287] error running [docker network inspect addons-816293]: docker network inspect addons-816293: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-816293 not found
I0923 13:10:12.590996 720939 network_create.go:289] output of [docker network inspect addons-816293]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-816293 not found
** /stderr **
I0923 13:10:12.591095 720939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 13:10:12.606292 720939 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016fa9d0}
I0923 13:10:12.606335 720939 network_create.go:124] attempt to create docker network addons-816293 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0923 13:10:12.606393 720939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-816293 addons-816293
I0923 13:10:12.672321 720939 network_create.go:108] docker network addons-816293 192.168.49.0/24 created
I0923 13:10:12.672350 720939 kic.go:121] calculated static IP "192.168.49.2" for the "addons-816293" container
I0923 13:10:12.672435 720939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0923 13:10:12.687929 720939 cli_runner.go:164] Run: docker volume create addons-816293 --label name.minikube.sigs.k8s.io=addons-816293 --label created_by.minikube.sigs.k8s.io=true
I0923 13:10:12.705673 720939 oci.go:103] Successfully created a docker volume addons-816293
I0923 13:10:12.705766 720939 cli_runner.go:164] Run: docker run --rm --name addons-816293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-816293 --entrypoint /usr/bin/test -v addons-816293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
I0923 13:10:14.760929 720939 cli_runner.go:217] Completed: docker run --rm --name addons-816293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-816293 --entrypoint /usr/bin/test -v addons-816293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.055120426s)
I0923 13:10:14.760983 720939 oci.go:107] Successfully prepared a docker volume addons-816293
I0923 13:10:14.761002 720939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 13:10:14.761022 720939 kic.go:194] Starting extracting preloaded images to volume ...
I0923 13:10:14.761086 720939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-816293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
I0923 13:10:18.470478 720939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-816293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.709347415s)
I0923 13:10:18.470512 720939 kic.go:203] duration metric: took 3.709486868s to extract preloaded images to volume ...
W0923 13:10:18.470651 720939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0923 13:10:18.470771 720939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0923 13:10:18.526038 720939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-816293 --name addons-816293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-816293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-816293 --network addons-816293 --ip 192.168.49.2 --volume addons-816293:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
I0923 13:10:18.862023 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Running}}
I0923 13:10:18.888667 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:18.911529 720939 cli_runner.go:164] Run: docker exec addons-816293 stat /var/lib/dpkg/alternatives/iptables
I0923 13:10:18.980838 720939 oci.go:144] the created container "addons-816293" has a running status.
I0923 13:10:18.980873 720939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa...
I0923 13:10:19.869841 720939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0923 13:10:19.896704 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:19.915359 720939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0923 13:10:19.915378 720939 kic_runner.go:114] Args: [docker exec --privileged addons-816293 chown docker:docker /home/docker/.ssh/authorized_keys]
I0923 13:10:19.985971 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:20.003533 720939 machine.go:93] provisionDockerMachine start ...
I0923 13:10:20.003651 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:20.034660 720939 main.go:141] libmachine: Using SSH client type: native
I0923 13:10:20.035077 720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 33528 <nil> <nil>}
I0923 13:10:20.035104 720939 main.go:141] libmachine: About to run SSH command:
hostname
I0923 13:10:20.168749 720939 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-816293
I0923 13:10:20.168790 720939 ubuntu.go:169] provisioning hostname "addons-816293"
I0923 13:10:20.168880 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:20.187171 720939 main.go:141] libmachine: Using SSH client type: native
I0923 13:10:20.187420 720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 33528 <nil> <nil>}
I0923 13:10:20.187439 720939 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-816293 && echo "addons-816293" | sudo tee /etc/hostname
I0923 13:10:20.333296 720939 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-816293
I0923 13:10:20.333437 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:20.350731 720939 main.go:141] libmachine: Using SSH client type: native
I0923 13:10:20.351003 720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 33528 <nil> <nil>}
I0923 13:10:20.351026 720939 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-816293' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-816293/g' /etc/hosts;
else
echo '127.0.1.1 addons-816293' | sudo tee -a /etc/hosts;
fi
fi
I0923 13:10:20.484996 720939 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0923 13:10:20.485030 720939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-714802/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-714802/.minikube}
I0923 13:10:20.485054 720939 ubuntu.go:177] setting up certificates
I0923 13:10:20.485065 720939 provision.go:84] configureAuth start
I0923 13:10:20.485126 720939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-816293
I0923 13:10:20.502106 720939 provision.go:143] copyHostCerts
I0923 13:10:20.502186 720939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-714802/.minikube/ca.pem (1078 bytes)
I0923 13:10:20.502308 720939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-714802/.minikube/cert.pem (1123 bytes)
I0923 13:10:20.502371 720939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-714802/.minikube/key.pem (1675 bytes)
I0923 13:10:20.502421 720939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-714802/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca-key.pem org=jenkins.addons-816293 san=[127.0.0.1 192.168.49.2 addons-816293 localhost minikube]
I0923 13:10:20.936599 720939 provision.go:177] copyRemoteCerts
I0923 13:10:20.936671 720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0923 13:10:20.936722 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:20.952754 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:21.054230 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0923 13:10:21.080631 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0923 13:10:21.105849 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0923 13:10:21.131556 720939 provision.go:87] duration metric: took 646.475864ms to configureAuth
I0923 13:10:21.131584 720939 ubuntu.go:193] setting minikube options for container-runtime
I0923 13:10:21.131779 720939 config.go:182] Loaded profile config "addons-816293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:10:21.131846 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:21.148230 720939 main.go:141] libmachine: Using SSH client type: native
I0923 13:10:21.148493 720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 33528 <nil> <nil>}
I0923 13:10:21.148515 720939 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0923 13:10:21.281722 720939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0923 13:10:21.281795 720939 ubuntu.go:71] root file system type: overlay
I0923 13:10:21.281922 720939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0923 13:10:21.281997 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:21.299391 720939 main.go:141] libmachine: Using SSH client type: native
I0923 13:10:21.299659 720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 33528 <nil> <nil>}
I0923 13:10:21.299746 720939 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0923 13:10:21.446255 720939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0923 13:10:21.446401 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:21.464616 720939 main.go:141] libmachine: Using SSH client type: native
I0923 13:10:21.464864 720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil> [] 0s} 127.0.0.1 33528 <nil> <nil>}
I0923 13:10:21.464882 720939 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0923 13:10:22.251146 720939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-19 14:24:16.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-23 13:10:21.440696641 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0923 13:10:22.251180 720939 machine.go:96] duration metric: took 2.24762702s to provisionDockerMachine
I0923 13:10:22.251192 720939 client.go:171] duration metric: took 10.633872078s to LocalClient.Create
I0923 13:10:22.251205 720939 start.go:167] duration metric: took 10.633933681s to libmachine.API.Create "addons-816293"
I0923 13:10:22.251213 720939 start.go:293] postStartSetup for "addons-816293" (driver="docker")
I0923 13:10:22.251224 720939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 13:10:22.251294 720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 13:10:22.251338 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:22.268312 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:22.363759 720939 ssh_runner.go:195] Run: cat /etc/os-release
I0923 13:10:22.367277 720939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0923 13:10:22.367316 720939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0923 13:10:22.367328 720939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0923 13:10:22.367335 720939 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0923 13:10:22.367345 720939 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-714802/.minikube/addons for local assets ...
I0923 13:10:22.367418 720939 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-714802/.minikube/files for local assets ...
I0923 13:10:22.367447 720939 start.go:296] duration metric: took 116.227236ms for postStartSetup
I0923 13:10:22.367760 720939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-816293
I0923 13:10:22.384260 720939 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/config.json ...
I0923 13:10:22.384542 720939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0923 13:10:22.384593 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:22.401121 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:22.493463 720939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0923 13:10:22.497713 720939 start.go:128] duration metric: took 10.883898048s to createHost
I0923 13:10:22.497740 720939 start.go:83] releasing machines lock for "addons-816293", held for 10.884042104s
I0923 13:10:22.497845 720939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-816293
I0923 13:10:22.514262 720939 ssh_runner.go:195] Run: cat /version.json
I0923 13:10:22.514315 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:22.514377 720939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0923 13:10:22.514459 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:22.532315 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:22.539857 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:22.624559 720939 ssh_runner.go:195] Run: systemctl --version
I0923 13:10:22.755452 720939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0923 13:10:22.759660 720939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0923 13:10:22.784001 720939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0923 13:10:22.784085 720939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0923 13:10:22.813434 720939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0923 13:10:22.813459 720939 start.go:495] detecting cgroup driver to use...
I0923 13:10:22.813496 720939 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 13:10:22.813596 720939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 13:10:22.830343 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0923 13:10:22.840146 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0923 13:10:22.850405 720939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0923 13:10:22.850475 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0923 13:10:22.860761 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 13:10:22.870877 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0923 13:10:22.881161 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 13:10:22.891331 720939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0923 13:10:22.900493 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0923 13:10:22.910332 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0923 13:10:22.919772 720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0923 13:10:22.929603 720939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0923 13:10:22.937959 720939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0923 13:10:22.946316 720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 13:10:23.025588 720939 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0923 13:10:23.116474 720939 start.go:495] detecting cgroup driver to use...
I0923 13:10:23.116532 720939 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 13:10:23.116594 720939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0923 13:10:23.135477 720939 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0923 13:10:23.135557 720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0923 13:10:23.148835 720939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 13:10:23.167243 720939 ssh_runner.go:195] Run: which cri-dockerd
I0923 13:10:23.171873 720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0923 13:10:23.181919 720939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0923 13:10:23.202728 720939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0923 13:10:23.312562 720939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0923 13:10:23.413833 720939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0923 13:10:23.413994 720939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0923 13:10:23.434193 720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 13:10:23.527544 720939 ssh_runner.go:195] Run: sudo systemctl restart docker
I0923 13:10:23.795685 720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0923 13:10:23.808571 720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 13:10:23.821252 720939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0923 13:10:23.906189 720939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0923 13:10:23.989692 720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 13:10:24.085184 720939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0923 13:10:24.100608 720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 13:10:24.112996 720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 13:10:24.202966 720939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0923 13:10:24.276610 720939 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0923 13:10:24.276701 720939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0923 13:10:24.280696 720939 start.go:563] Will wait 60s for crictl version
I0923 13:10:24.280764 720939 ssh_runner.go:195] Run: which crictl
I0923 13:10:24.284726 720939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0923 13:10:24.320025 720939 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.0
RuntimeApiVersion: v1
I0923 13:10:24.320096 720939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0923 13:10:24.343422 720939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0923 13:10:24.368709 720939 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
I0923 13:10:24.368810 720939 cli_runner.go:164] Run: docker network inspect addons-816293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 13:10:24.383883 720939 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0923 13:10:24.387438 720939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 13:10:24.397959 720939 kubeadm.go:883] updating cluster {Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0923 13:10:24.398077 720939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 13:10:24.398147 720939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0923 13:10:24.415395 720939 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0923 13:10:24.415415 720939 docker.go:615] Images already preloaded, skipping extraction
I0923 13:10:24.415477 720939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0923 13:10:24.434191 720939 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0923 13:10:24.434217 720939 cache_images.go:84] Images are preloaded, skipping loading
I0923 13:10:24.434229 720939 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0923 13:10:24.434325 720939 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-816293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0923 13:10:24.434394 720939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0923 13:10:24.476644 720939 cni.go:84] Creating CNI manager for ""
I0923 13:10:24.476675 720939 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 13:10:24.476686 720939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 13:10:24.476706 720939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-816293 NodeName:addons-816293 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0923 13:10:24.476861 720939 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-816293"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0923 13:10:24.476935 720939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0923 13:10:24.486119 720939 binaries.go:44] Found k8s binaries, skipping transfer
I0923 13:10:24.486204 720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0923 13:10:24.495119 720939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0923 13:10:24.513069 720939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0923 13:10:24.530819 720939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0923 13:10:24.549004 720939 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0923 13:10:24.552427 720939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0923 13:10:24.562847 720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 13:10:24.647764 720939 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 13:10:24.662329 720939 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293 for IP: 192.168.49.2
I0923 13:10:24.662362 720939 certs.go:194] generating shared ca certs ...
I0923 13:10:24.662378 720939 certs.go:226] acquiring lock for ca certs: {Name:mk527b93d9674c57825754d278442fd54dec1acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:24.662589 720939 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key
I0923 13:10:25.187270 720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt ...
I0923 13:10:25.187303 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt: {Name:mk59fe7ff27825d0b3e1b83df770cf8e994653de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:25.187576 720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key ...
I0923 13:10:25.187594 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key: {Name:mk5ab979687a32aa82781efad074c75a1a3ef4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:25.187717 720939 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key
I0923 13:10:25.724045 720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.crt ...
I0923 13:10:25.724075 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.crt: {Name:mk61a2cf52e2f511aa7a57cfe7b8f0edcef0198a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:25.724272 720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key ...
I0923 13:10:25.724287 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key: {Name:mk20e6309a58973809fa54cdff3588c828d810ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:25.724897 720939 certs.go:256] generating profile certs ...
I0923 13:10:25.725018 720939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.key
I0923 13:10:25.725042 720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt with IP's: []
I0923 13:10:26.553938 720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt ...
I0923 13:10:26.553979 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: {Name:mkd3101b334a8f1113e6f94ea9272ae499d0bd02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:26.554179 720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.key ...
I0923 13:10:26.554192 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.key: {Name:mk7d8f718c134d5f973f5e94b9ebc740f2282c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:26.554273 720939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5
I0923 13:10:26.554297 720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0923 13:10:26.862110 720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5 ...
I0923 13:10:26.862136 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5: {Name:mk95eba5a1add8e5a1494e5cfa31b736f2af1bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:26.862302 720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5 ...
I0923 13:10:26.862310 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5: {Name:mk54f663391840dea7d94377a1d53565a226a2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:26.862381 720939 certs.go:381] copying /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5 -> /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt
I0923 13:10:26.862457 720939 certs.go:385] copying /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5 -> /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key
I0923 13:10:26.862504 720939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key
I0923 13:10:26.862518 720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt with IP's: []
I0923 13:10:27.145865 720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt ...
I0923 13:10:27.145917 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt: {Name:mkc7dfe1f891e2289ffd7a80fb069c53bd37ea36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:27.146111 720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key ...
I0923 13:10:27.146126 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key: {Name:mkde5093e6a75d02720ab44a10ee057d2ec0c779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:27.146320 720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca-key.pem (1679 bytes)
I0923 13:10:27.146367 720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem (1078 bytes)
I0923 13:10:27.146401 720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem (1123 bytes)
I0923 13:10:27.146426 720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/key.pem (1675 bytes)
I0923 13:10:27.147075 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0923 13:10:27.173602 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0923 13:10:27.199598 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0923 13:10:27.224753 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0923 13:10:27.249401 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0923 13:10:27.274205 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0923 13:10:27.299033 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0923 13:10:27.323528 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0923 13:10:27.347685 720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0923 13:10:27.372613 720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0923 13:10:27.391037 720939 ssh_runner.go:195] Run: openssl version
I0923 13:10:27.396578 720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0923 13:10:27.406529 720939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0923 13:10:27.409995 720939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:10 /usr/share/ca-certificates/minikubeCA.pem
I0923 13:10:27.410070 720939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0923 13:10:27.417098 720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0923 13:10:27.426571 720939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0923 13:10:27.429939 720939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0923 13:10:27.430026 720939 kubeadm.go:392] StartCluster: {Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 13:10:27.430167 720939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0923 13:10:27.447994 720939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0923 13:10:27.457233 720939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0923 13:10:27.466532 720939 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0923 13:10:27.466599 720939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0923 13:10:27.475913 720939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0923 13:10:27.475935 720939 kubeadm.go:157] found existing configuration files:
I0923 13:10:27.476014 720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0923 13:10:27.484737 720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0923 13:10:27.484832 720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0923 13:10:27.493780 720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0923 13:10:27.503122 720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0923 13:10:27.503193 720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0923 13:10:27.511851 720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0923 13:10:27.520888 720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0923 13:10:27.520996 720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0923 13:10:27.529507 720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0923 13:10:27.538462 720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0923 13:10:27.538535 720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0923 13:10:27.547634 720939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0923 13:10:27.591608 720939 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0923 13:10:27.591908 720939 kubeadm.go:310] [preflight] Running pre-flight checks
I0923 13:10:27.623613 720939 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0923 13:10:27.623775 720939 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0923 13:10:27.623842 720939 kubeadm.go:310] OS: Linux
I0923 13:10:27.623918 720939 kubeadm.go:310] CGROUPS_CPU: enabled
I0923 13:10:27.624005 720939 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0923 13:10:27.624087 720939 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0923 13:10:27.624173 720939 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0923 13:10:27.624257 720939 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0923 13:10:27.624341 720939 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0923 13:10:27.624422 720939 kubeadm.go:310] CGROUPS_PIDS: enabled
I0923 13:10:27.624505 720939 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0923 13:10:27.624587 720939 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0923 13:10:27.702832 720939 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0923 13:10:27.703016 720939 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0923 13:10:27.703151 720939 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0923 13:10:27.715274 720939 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0923 13:10:27.719247 720939 out.go:235] - Generating certificates and keys ...
I0923 13:10:27.719484 720939 kubeadm.go:310] [certs] Using existing ca certificate authority
I0923 13:10:27.719602 720939 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0923 13:10:28.382662 720939 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0923 13:10:28.733735 720939 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0923 13:10:29.634765 720939 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0923 13:10:29.914615 720939 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0923 13:10:30.523225 720939 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0923 13:10:30.523593 720939 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-816293 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 13:10:30.701156 720939 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0923 13:10:30.701511 720939 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-816293 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0923 13:10:30.980320 720939 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0923 13:10:31.397219 720939 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0923 13:10:31.692868 720939 kubeadm.go:310] [certs] Generating "sa" key and public key
I0923 13:10:31.693130 720939 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0923 13:10:32.304993 720939 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0923 13:10:33.410248 720939 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0923 13:10:34.376376 720939 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0923 13:10:34.986171 720939 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0923 13:10:35.429310 720939 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0923 13:10:35.430051 720939 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0923 13:10:35.433006 720939 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0923 13:10:35.435260 720939 out.go:235] - Booting up control plane ...
I0923 13:10:35.435373 720939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0923 13:10:35.435449 720939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0923 13:10:35.436168 720939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0923 13:10:35.448832 720939 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0923 13:10:35.455814 720939 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0923 13:10:35.455874 720939 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0923 13:10:35.562845 720939 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0923 13:10:35.562990 720939 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0923 13:10:36.563769 720939 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000909154s
I0923 13:10:36.563857 720939 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0923 13:10:42.565732 720939 kubeadm.go:310] [api-check] The API server is healthy after 6.001922516s
I0923 13:10:42.590107 720939 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0923 13:10:42.608798 720939 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0923 13:10:42.630773 720939 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0923 13:10:42.630978 720939 kubeadm.go:310] [mark-control-plane] Marking the node addons-816293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0923 13:10:42.643037 720939 kubeadm.go:310] [bootstrap-token] Using token: jue7t8.ifonfcbdzs91nmi7
I0923 13:10:42.645240 720939 out.go:235] - Configuring RBAC rules ...
I0923 13:10:42.645378 720939 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0923 13:10:42.651732 720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0923 13:10:42.659630 720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0923 13:10:42.663553 720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0923 13:10:42.667443 720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0923 13:10:42.671027 720939 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0923 13:10:42.976818 720939 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0923 13:10:43.403104 720939 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0923 13:10:43.978120 720939 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0923 13:10:43.979217 720939 kubeadm.go:310]
I0923 13:10:43.979307 720939 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0923 13:10:43.979320 720939 kubeadm.go:310]
I0923 13:10:43.979396 720939 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0923 13:10:43.979405 720939 kubeadm.go:310]
I0923 13:10:43.979430 720939 kubeadm.go:310] mkdir -p $HOME/.kube
I0923 13:10:43.979492 720939 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0923 13:10:43.979549 720939 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0923 13:10:43.979566 720939 kubeadm.go:310]
I0923 13:10:43.979635 720939 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0923 13:10:43.979647 720939 kubeadm.go:310]
I0923 13:10:43.979695 720939 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0923 13:10:43.979702 720939 kubeadm.go:310]
I0923 13:10:43.979754 720939 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0923 13:10:43.979835 720939 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0923 13:10:43.979910 720939 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0923 13:10:43.979919 720939 kubeadm.go:310]
I0923 13:10:43.980009 720939 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0923 13:10:43.980091 720939 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0923 13:10:43.980099 720939 kubeadm.go:310]
I0923 13:10:43.980196 720939 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jue7t8.ifonfcbdzs91nmi7 \
I0923 13:10:43.980313 720939 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:55443aee913122bbe6356d8284b0f4f2215d82633d1715094eaa306e6aa2be51 \
I0923 13:10:43.980343 720939 kubeadm.go:310] --control-plane
I0923 13:10:43.980350 720939 kubeadm.go:310]
I0923 13:10:43.980433 720939 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0923 13:10:43.980443 720939 kubeadm.go:310]
I0923 13:10:43.980527 720939 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jue7t8.ifonfcbdzs91nmi7 \
I0923 13:10:43.980643 720939 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:55443aee913122bbe6356d8284b0f4f2215d82633d1715094eaa306e6aa2be51
I0923 13:10:43.984850 720939 kubeadm.go:310] W0923 13:10:27.587369 1812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 13:10:43.985167 720939 kubeadm.go:310] W0923 13:10:27.588805 1812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 13:10:43.985385 720939 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0923 13:10:43.985490 720939 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0923 13:10:43.985509 720939 cni.go:84] Creating CNI manager for ""
I0923 13:10:43.985524 720939 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 13:10:43.989382 720939 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0923 13:10:43.991510 720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0923 13:10:44.000322 720939 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0923 13:10:44.027941 720939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0923 13:10:44.028087 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:44.028184 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-816293 minikube.k8s.io/updated_at=2024_09_23T13_10_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-816293 minikube.k8s.io/primary=true
I0923 13:10:44.181663 720939 ops.go:34] apiserver oom_adj: -16
I0923 13:10:44.181818 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:44.682249 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:45.182648 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:45.682511 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:46.182693 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:46.681862 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:47.182820 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:47.682454 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:48.182220 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:48.681899 720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 13:10:48.812954 720939 kubeadm.go:1113] duration metric: took 4.784905874s to wait for elevateKubeSystemPrivileges
I0923 13:10:48.812992 720939 kubeadm.go:394] duration metric: took 21.383000135s to StartCluster
I0923 13:10:48.813009 720939 settings.go:142] acquiring lock: {Name:mke1d97646bb6c4928996b4a93e7bcff38158bd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:48.813113 720939 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19690-714802/kubeconfig
I0923 13:10:48.813480 720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/kubeconfig: {Name:mk0b3bc0004539087df2d1e8d84176d4090fd8e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 13:10:48.813677 720939 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 13:10:48.813781 720939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0923 13:10:48.814021 720939 config.go:182] Loaded profile config "addons-816293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:10:48.814056 720939 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0923 13:10:48.814131 720939 addons.go:69] Setting yakd=true in profile "addons-816293"
I0923 13:10:48.814149 720939 addons.go:234] Setting addon yakd=true in "addons-816293"
I0923 13:10:48.814171 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.814664 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.815129 720939 addons.go:69] Setting cloud-spanner=true in profile "addons-816293"
I0923 13:10:48.815136 720939 addons.go:69] Setting metrics-server=true in profile "addons-816293"
I0923 13:10:48.815149 720939 addons.go:234] Setting addon cloud-spanner=true in "addons-816293"
I0923 13:10:48.815158 720939 addons.go:234] Setting addon metrics-server=true in "addons-816293"
I0923 13:10:48.815173 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.815183 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.815588 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.815611 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.816044 720939 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-816293"
I0923 13:10:48.816069 720939 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-816293"
I0923 13:10:48.816096 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.816520 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.819646 720939 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-816293"
I0923 13:10:48.819725 720939 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-816293"
I0923 13:10:48.819756 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.820209 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.829068 720939 addons.go:69] Setting registry=true in profile "addons-816293"
I0923 13:10:48.829109 720939 addons.go:234] Setting addon registry=true in "addons-816293"
I0923 13:10:48.829151 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.829649 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.831694 720939 addons.go:69] Setting default-storageclass=true in profile "addons-816293"
I0923 13:10:48.831731 720939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-816293"
I0923 13:10:48.832062 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.832216 720939 addons.go:69] Setting storage-provisioner=true in profile "addons-816293"
I0923 13:10:48.832231 720939 addons.go:234] Setting addon storage-provisioner=true in "addons-816293"
I0923 13:10:48.832258 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.832731 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.849046 720939 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-816293"
I0923 13:10:48.849086 720939 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-816293"
I0923 13:10:48.849431 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.855572 720939 addons.go:69] Setting gcp-auth=true in profile "addons-816293"
I0923 13:10:48.855609 720939 mustload.go:65] Loading cluster: addons-816293
I0923 13:10:48.855802 720939 config.go:182] Loaded profile config "addons-816293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:10:48.856054 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.870510 720939 addons.go:69] Setting volcano=true in profile "addons-816293"
I0923 13:10:48.870553 720939 addons.go:234] Setting addon volcano=true in "addons-816293"
I0923 13:10:48.870594 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.871070 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.871435 720939 addons.go:69] Setting ingress=true in profile "addons-816293"
I0923 13:10:48.871456 720939 addons.go:234] Setting addon ingress=true in "addons-816293"
I0923 13:10:48.871492 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.871909 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.892853 720939 addons.go:69] Setting volumesnapshots=true in profile "addons-816293"
I0923 13:10:48.892899 720939 addons.go:234] Setting addon volumesnapshots=true in "addons-816293"
I0923 13:10:48.893094 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.893612 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.896683 720939 addons.go:69] Setting ingress-dns=true in profile "addons-816293"
I0923 13:10:48.896715 720939 addons.go:234] Setting addon ingress-dns=true in "addons-816293"
I0923 13:10:48.896758 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.898795 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.914428 720939 out.go:177] * Verifying Kubernetes components...
I0923 13:10:48.917327 720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0923 13:10:48.918529 720939 addons.go:69] Setting inspektor-gadget=true in profile "addons-816293"
I0923 13:10:48.918566 720939 addons.go:234] Setting addon inspektor-gadget=true in "addons-816293"
I0923 13:10:48.918604 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:48.919100 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:48.951550 720939 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0923 13:10:48.966366 720939 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0923 13:10:48.966391 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0923 13:10:48.966458 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:48.981708 720939 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0923 13:10:48.983727 720939 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 13:10:48.983754 720939 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0923 13:10:48.983841 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.004509 720939 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0923 13:10:49.005557 720939 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0923 13:10:49.031045 720939 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0923 13:10:49.031078 720939 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0923 13:10:49.031166 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.067402 720939 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0923 13:10:49.069178 720939 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0923 13:10:49.071383 720939 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0923 13:10:49.073399 720939 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0923 13:10:49.077055 720939 out.go:177] - Using image docker.io/registry:2.8.3
I0923 13:10:49.029103 720939 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 13:10:49.079685 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0923 13:10:49.079772 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.099134 720939 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0923 13:10:49.101341 720939 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0923 13:10:49.101410 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0923 13:10:49.101515 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.117649 720939 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0923 13:10:49.122069 720939 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0923 13:10:49.126955 720939 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0923 13:10:49.132915 720939 addons.go:234] Setting addon default-storageclass=true in "addons-816293"
I0923 13:10:49.137304 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:49.137800 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:49.137975 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:49.143680 720939 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-816293"
I0923 13:10:49.143725 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:49.144158 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:49.163107 720939 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0923 13:10:49.164782 720939 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0923 13:10:49.173087 720939 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0923 13:10:49.175363 720939 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0923 13:10:49.183029 720939 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0923 13:10:49.183067 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0923 13:10:49.183138 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.183561 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 13:10:49.183575 720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0923 13:10:49.183626 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.209504 720939 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 13:10:49.209828 720939 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0923 13:10:49.213933 720939 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 13:10:49.214160 720939 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 13:10:49.214176 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0923 13:10:49.214241 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.223341 720939 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0923 13:10:49.230023 720939 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0923 13:10:49.230250 720939 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0923 13:10:49.230487 720939 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0923 13:10:49.230517 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0923 13:10:49.230607 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.234828 720939 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0923 13:10:49.234854 720939 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0923 13:10:49.235567 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.237868 720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 13:10:49.237936 720939 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0923 13:10:49.238030 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.272389 720939 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0923 13:10:49.274551 720939 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0923 13:10:49.274573 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0923 13:10:49.274633 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.294191 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.296688 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.310657 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.337707 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.338331 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.423616 720939 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0923 13:10:49.423637 720939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0923 13:10:49.423710 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.425436 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.427483 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.428691 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.447388 720939 out.go:177] - Using image docker.io/busybox:stable
I0923 13:10:49.449688 720939 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0923 13:10:49.452863 720939 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 13:10:49.452887 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0923 13:10:49.452971 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:49.457325 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.462385 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.469218 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.469867 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.505080 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:49.508343 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
W0923 13:10:49.511403 720939 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0923 13:10:49.511432 720939 retry.go:31] will retry after 297.619035ms: ssh: handshake failed: EOF
I0923 13:10:50.051540 720939 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.134169766s)
I0923 13:10:50.051624 720939 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0923 13:10:50.051686 720939 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.237885277s)
I0923 13:10:50.051826 720939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0923 13:10:50.055739 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 13:10:50.102856 720939 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0923 13:10:50.102888 720939 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0923 13:10:50.114224 720939 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 13:10:50.114251 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0923 13:10:50.265396 720939 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 13:10:50.265419 720939 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0923 13:10:50.280469 720939 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0923 13:10:50.280537 720939 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0923 13:10:50.286519 720939 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 13:10:50.286585 720939 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0923 13:10:50.308322 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0923 13:10:50.329281 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0923 13:10:50.392095 720939 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 13:10:50.392163 720939 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0923 13:10:50.438032 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0923 13:10:50.458750 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 13:10:50.495917 720939 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0923 13:10:50.495997 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0923 13:10:50.500665 720939 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0923 13:10:50.500743 720939 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0923 13:10:50.504668 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 13:10:50.532388 720939 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 13:10:50.532461 720939 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0923 13:10:50.562236 720939 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 13:10:50.562309 720939 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0923 13:10:50.573925 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 13:10:50.573999 720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0923 13:10:50.575381 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0923 13:10:50.603740 720939 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0923 13:10:50.603768 720939 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0923 13:10:50.622327 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0923 13:10:50.631113 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0923 13:10:50.713379 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0923 13:10:50.768784 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 13:10:50.768864 720939 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0923 13:10:50.773988 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 13:10:50.774069 720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0923 13:10:50.775273 720939 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0923 13:10:50.775375 720939 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0923 13:10:50.815549 720939 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0923 13:10:50.815641 720939 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0923 13:10:51.120037 720939 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 13:10:51.120120 720939 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0923 13:10:51.124027 720939 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 13:10:51.124055 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0923 13:10:51.128930 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 13:10:51.128972 720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0923 13:10:51.249958 720939 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0923 13:10:51.249984 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0923 13:10:51.417480 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 13:10:51.448225 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 13:10:51.448255 720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0923 13:10:51.500479 720939 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 13:10:51.500506 720939 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0923 13:10:51.573052 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0923 13:10:51.762571 720939 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 13:10:51.762605 720939 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0923 13:10:51.783820 720939 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0923 13:10:51.783845 720939 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0923 13:10:51.803694 720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 13:10:51.803723 720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0923 13:10:51.842288 720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 13:10:51.842318 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0923 13:10:51.988668 720939 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0923 13:10:51.988692 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0923 13:10:52.087480 720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 13:10:52.087507 720939 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0923 13:10:52.213277 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0923 13:10:52.444987 720939 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.39333283s)
I0923 13:10:52.445871 720939 node_ready.go:35] waiting up to 6m0s for node "addons-816293" to be "Ready" ...
I0923 13:10:52.447067 720939 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.395192038s)
I0923 13:10:52.447094 720939 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0923 13:10:52.455977 720939 node_ready.go:49] node "addons-816293" has status "Ready":"True"
I0923 13:10:52.456004 720939 node_ready.go:38] duration metric: took 10.090688ms for node "addons-816293" to be "Ready" ...
I0923 13:10:52.456014 720939 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 13:10:52.471972 720939 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace to be "Ready" ...
I0923 13:10:52.498869 720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 13:10:52.498895 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0923 13:10:52.950428 720939 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-816293" context rescaled to 1 replicas
I0923 13:10:52.988785 720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 13:10:52.988808 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0923 13:10:53.788792 720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 13:10:53.788819 720939 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0923 13:10:54.206674 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 13:10:54.479692 720939 pod_ready.go:103] pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status "Ready":"False"
I0923 13:10:55.548647 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.492862021s)
I0923 13:10:56.150328 720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0923 13:10:56.150416 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:56.183676 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:56.483692 720939 pod_ready.go:103] pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status "Ready":"False"
I0923 13:10:57.014232 720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0923 13:10:57.147544 720939 addons.go:234] Setting addon gcp-auth=true in "addons-816293"
I0923 13:10:57.147654 720939 host.go:66] Checking if "addons-816293" exists ...
I0923 13:10:57.148221 720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
I0923 13:10:57.175729 720939 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0923 13:10:57.175784 720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
I0923 13:10:57.212830 720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
I0923 13:10:58.985326 720939 pod_ready.go:98] pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 13:10:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 13:10:50 +0000 UTC,FinishedAt:2024-09-23 13:10:58 +0000 UTC,ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467 Started:0x4001c95610 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001ccc170} {Name:kube-api-access-b4dm4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001ccc180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0923 13:10:58.985542 720939 pod_ready.go:82] duration metric: took 6.513524778s for pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace to be "Ready" ...
E0923 13:10:58.985574 720939 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 13:10:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 13:10:50 +0000 UTC,FinishedAt:2024-09-23 13:10:58 +0000 UTC,ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467 Started:0x4001c95610 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001ccc170} {Name:kube-api-access-b4dm4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001ccc180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0923 13:10:58.985629 720939 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rwnh8" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.018588 720939 pod_ready.go:93] pod "coredns-7c65d6cfc9-rwnh8" in "kube-system" namespace has status "Ready":"True"
I0923 13:10:59.018616 720939 pod_ready.go:82] duration metric: took 32.962102ms for pod "coredns-7c65d6cfc9-rwnh8" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.018628 720939 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.081556 720939 pod_ready.go:93] pod "etcd-addons-816293" in "kube-system" namespace has status "Ready":"True"
I0923 13:10:59.081631 720939 pod_ready.go:82] duration metric: took 62.993213ms for pod "etcd-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.081660 720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.110977 720939 pod_ready.go:93] pod "kube-apiserver-addons-816293" in "kube-system" namespace has status "Ready":"True"
I0923 13:10:59.111041 720939 pod_ready.go:82] duration metric: took 29.3602ms for pod "kube-apiserver-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.111075 720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.136659 720939 pod_ready.go:93] pod "kube-controller-manager-addons-816293" in "kube-system" namespace has status "Ready":"True"
I0923 13:10:59.136682 720939 pod_ready.go:82] duration metric: took 25.586055ms for pod "kube-controller-manager-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.136694 720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwjn5" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.384165 720939 pod_ready.go:93] pod "kube-proxy-gwjn5" in "kube-system" namespace has status "Ready":"True"
I0923 13:10:59.384236 720939 pod_ready.go:82] duration metric: took 247.533229ms for pod "kube-proxy-gwjn5" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.384262 720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.856482 720939 pod_ready.go:93] pod "kube-scheduler-addons-816293" in "kube-system" namespace has status "Ready":"True"
I0923 13:10:59.856555 720939 pod_ready.go:82] duration metric: took 472.270779ms for pod "kube-scheduler-addons-816293" in "kube-system" namespace to be "Ready" ...
I0923 13:10:59.856590 720939 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace to be "Ready" ...
I0923 13:11:01.290819 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.982459357s)
I0923 13:11:01.290820 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.961506877s)
I0923 13:11:01.291014 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.852901161s)
I0923 13:11:01.291550 720939 addons.go:475] Verifying addon ingress=true in "addons-816293"
I0923 13:11:01.291071 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.832295444s)
I0923 13:11:01.291127 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.786363306s)
I0923 13:11:01.291165 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.715725863s)
I0923 13:11:01.291184 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.668780146s)
I0923 13:11:01.291255 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.660067972s)
I0923 13:11:01.292061 720939 addons.go:475] Verifying addon metrics-server=true in "addons-816293"
I0923 13:11:01.291278 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.577824294s)
I0923 13:11:01.292097 720939 addons.go:475] Verifying addon registry=true in "addons-816293"
I0923 13:11:01.291352 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.873843059s)
W0923 13:11:01.292356 720939 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 13:11:01.292382 720939 retry.go:31] will retry after 249.364787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 13:11:01.291381 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.718303252s)
I0923 13:11:01.291433 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.078128989s)
I0923 13:11:01.295167 720939 out.go:177] * Verifying ingress addon...
I0923 13:11:01.298271 720939 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-816293 service yakd-dashboard -n yakd-dashboard
I0923 13:11:01.298289 720939 out.go:177] * Verifying registry addon...
I0923 13:11:01.301154 720939 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0923 13:11:01.303021 720939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0923 13:11:01.337349 720939 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0923 13:11:01.337428 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:01.337784 720939 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0923 13:11:01.337808 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:01.542142 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 13:11:01.810938 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:01.811656 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:01.876598 720939 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"False"
I0923 13:11:02.159697 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.952920258s)
I0923 13:11:02.159733 720939 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-816293"
I0923 13:11:02.159930 720939 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.984180043s)
I0923 13:11:02.163398 720939 out.go:177] * Verifying csi-hostpath-driver addon...
I0923 13:11:02.163467 720939 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 13:11:02.166369 720939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 13:11:02.168687 720939 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0923 13:11:02.170720 720939 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 13:11:02.170749 720939 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0923 13:11:02.173775 720939 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 13:11:02.173801 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:02.300020 720939 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 13:11:02.300099 720939 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0923 13:11:02.307811 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:02.308388 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:02.381401 720939 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 13:11:02.381477 720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0923 13:11:02.465484 720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 13:11:02.672220 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:02.805745 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:02.807065 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:03.177797 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:03.309091 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:03.310536 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:03.692589 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:03.808103 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:03.811266 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:03.923334 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.381139499s)
I0923 13:11:03.995930 720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.530340717s)
I0923 13:11:03.999377 720939 addons.go:475] Verifying addon gcp-auth=true in "addons-816293"
I0923 13:11:04.002755 720939 out.go:177] * Verifying gcp-auth addon...
I0923 13:11:04.006688 720939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0923 13:11:04.013438 720939 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 13:11:04.171421 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:04.307626 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:04.308160 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:04.362776 720939 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"False"
I0923 13:11:04.671127 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:04.805602 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:04.807429 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:05.172539 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:05.306452 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:05.308077 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:05.672265 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:05.806564 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:05.807918 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:06.186368 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:06.310200 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:06.311979 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:06.363260 720939 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"False"
I0923 13:11:06.672458 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:06.805405 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:06.807063 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:07.171342 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:07.309069 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:07.309403 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:07.672182 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:07.805894 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:07.807656 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:07.863759 720939 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"True"
I0923 13:11:07.863844 720939 pod_ready.go:82] duration metric: took 8.007219627s for pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace to be "Ready" ...
I0923 13:11:07.863859 720939 pod_ready.go:39] duration metric: took 15.407830427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 13:11:07.863894 720939 api_server.go:52] waiting for apiserver process to appear ...
I0923 13:11:07.863971 720939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 13:11:07.889569 720939 api_server.go:72] duration metric: took 19.075852655s to wait for apiserver process to appear ...
I0923 13:11:07.889593 720939 api_server.go:88] waiting for apiserver healthz status ...
I0923 13:11:07.889617 720939 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0923 13:11:07.897686 720939 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0923 13:11:07.898780 720939 api_server.go:141] control plane version: v1.31.1
I0923 13:11:07.898810 720939 api_server.go:131] duration metric: took 9.209459ms to wait for apiserver health ...
I0923 13:11:07.898819 720939 system_pods.go:43] waiting for kube-system pods to appear ...
I0923 13:11:07.908448 720939 system_pods.go:59] 17 kube-system pods found
I0923 13:11:07.908492 720939 system_pods.go:61] "coredns-7c65d6cfc9-rwnh8" [3d69dc29-1c82-4b3a-9971-f16148da1c94] Running
I0923 13:11:07.908502 720939 system_pods.go:61] "csi-hostpath-attacher-0" [fbc9849b-13fd-4116-93fd-e8f8dae194a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 13:11:07.908518 720939 system_pods.go:61] "csi-hostpath-resizer-0" [cfc38a7b-5b9f-4e7e-af30-e8877917f7e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 13:11:07.908526 720939 system_pods.go:61] "csi-hostpathplugin-c4lh2" [e0fb341e-c2bd-4695-ac48-a02a506144a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 13:11:07.908531 720939 system_pods.go:61] "etcd-addons-816293" [68360a2d-d1c5-43a9-aa74-f5a1933de24f] Running
I0923 13:11:07.908535 720939 system_pods.go:61] "kube-apiserver-addons-816293" [ec94d9f9-0507-45ca-8e6e-f79a3fc7bec7] Running
I0923 13:11:07.908539 720939 system_pods.go:61] "kube-controller-manager-addons-816293" [3ea0e19d-de51-48d0-bfa9-ea6e088fe2e9] Running
I0923 13:11:07.908546 720939 system_pods.go:61] "kube-ingress-dns-minikube" [2efd2e4d-1a47-467e-ad0e-457bae12ae22] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0923 13:11:07.908556 720939 system_pods.go:61] "kube-proxy-gwjn5" [0af796ff-1040-456d-97b6-df619abe545e] Running
I0923 13:11:07.908565 720939 system_pods.go:61] "kube-scheduler-addons-816293" [f43016b1-c5cf-4c34-9a8e-21107d5ef1d7] Running
I0923 13:11:07.908576 720939 system_pods.go:61] "metrics-server-84c5f94fbc-v6k5c" [47fddf7e-71ac-4304-b3a5-52200b9e861f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 13:11:07.908581 720939 system_pods.go:61] "nvidia-device-plugin-daemonset-95vmg" [0441bbd4-ba18-4999-88db-f008dcc67689] Running
I0923 13:11:07.908591 720939 system_pods.go:61] "registry-66c9cd494c-tgghm" [ec93b34f-db00-4bde-8ed0-46a67564f5cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 13:11:07.908597 720939 system_pods.go:61] "registry-proxy-tf8z6" [7b435d50-4b55-4c70-b6d9-b0e1fd522370] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 13:11:07.908605 720939 system_pods.go:61] "snapshot-controller-56fcc65765-k6tz8" [ade0691a-a8fa-467c-be76-bea4c2d80355] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 13:11:07.908612 720939 system_pods.go:61] "snapshot-controller-56fcc65765-w468l" [e9a88580-956b-467b-9bc2-88466f70ce93] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 13:11:07.908623 720939 system_pods.go:61] "storage-provisioner" [047ae27f-d615-4981-915a-b081568bfd65] Running
I0923 13:11:07.908631 720939 system_pods.go:74] duration metric: took 9.805532ms to wait for pod list to return data ...
I0923 13:11:07.908644 720939 default_sa.go:34] waiting for default service account to be created ...
I0923 13:11:07.911839 720939 default_sa.go:45] found service account: "default"
I0923 13:11:07.911867 720939 default_sa.go:55] duration metric: took 3.216283ms for default service account to be created ...
I0923 13:11:07.911879 720939 system_pods.go:116] waiting for k8s-apps to be running ...
I0923 13:11:07.921800 720939 system_pods.go:86] 17 kube-system pods found
I0923 13:11:07.921882 720939 system_pods.go:89] "coredns-7c65d6cfc9-rwnh8" [3d69dc29-1c82-4b3a-9971-f16148da1c94] Running
I0923 13:11:07.921907 720939 system_pods.go:89] "csi-hostpath-attacher-0" [fbc9849b-13fd-4116-93fd-e8f8dae194a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 13:11:07.921932 720939 system_pods.go:89] "csi-hostpath-resizer-0" [cfc38a7b-5b9f-4e7e-af30-e8877917f7e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 13:11:07.921966 720939 system_pods.go:89] "csi-hostpathplugin-c4lh2" [e0fb341e-c2bd-4695-ac48-a02a506144a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 13:11:07.921992 720939 system_pods.go:89] "etcd-addons-816293" [68360a2d-d1c5-43a9-aa74-f5a1933de24f] Running
I0923 13:11:07.922013 720939 system_pods.go:89] "kube-apiserver-addons-816293" [ec94d9f9-0507-45ca-8e6e-f79a3fc7bec7] Running
I0923 13:11:07.922034 720939 system_pods.go:89] "kube-controller-manager-addons-816293" [3ea0e19d-de51-48d0-bfa9-ea6e088fe2e9] Running
I0923 13:11:07.922070 720939 system_pods.go:89] "kube-ingress-dns-minikube" [2efd2e4d-1a47-467e-ad0e-457bae12ae22] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0923 13:11:07.922094 720939 system_pods.go:89] "kube-proxy-gwjn5" [0af796ff-1040-456d-97b6-df619abe545e] Running
I0923 13:11:07.922114 720939 system_pods.go:89] "kube-scheduler-addons-816293" [f43016b1-c5cf-4c34-9a8e-21107d5ef1d7] Running
I0923 13:11:07.922140 720939 system_pods.go:89] "metrics-server-84c5f94fbc-v6k5c" [47fddf7e-71ac-4304-b3a5-52200b9e861f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 13:11:07.922170 720939 system_pods.go:89] "nvidia-device-plugin-daemonset-95vmg" [0441bbd4-ba18-4999-88db-f008dcc67689] Running
I0923 13:11:07.922199 720939 system_pods.go:89] "registry-66c9cd494c-tgghm" [ec93b34f-db00-4bde-8ed0-46a67564f5cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 13:11:07.922220 720939 system_pods.go:89] "registry-proxy-tf8z6" [7b435d50-4b55-4c70-b6d9-b0e1fd522370] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 13:11:07.922242 720939 system_pods.go:89] "snapshot-controller-56fcc65765-k6tz8" [ade0691a-a8fa-467c-be76-bea4c2d80355] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 13:11:07.922277 720939 system_pods.go:89] "snapshot-controller-56fcc65765-w468l" [e9a88580-956b-467b-9bc2-88466f70ce93] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 13:11:07.922301 720939 system_pods.go:89] "storage-provisioner" [047ae27f-d615-4981-915a-b081568bfd65] Running
I0923 13:11:07.922324 720939 system_pods.go:126] duration metric: took 10.432315ms to wait for k8s-apps to be running ...
I0923 13:11:07.922345 720939 system_svc.go:44] waiting for kubelet service to be running ....
I0923 13:11:07.922439 720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0923 13:11:07.936836 720939 system_svc.go:56] duration metric: took 14.481805ms WaitForService to wait for kubelet
I0923 13:11:07.936863 720939 kubeadm.go:582] duration metric: took 19.123153651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 13:11:07.936883 720939 node_conditions.go:102] verifying NodePressure condition ...
I0923 13:11:07.940643 720939 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0923 13:11:07.940674 720939 node_conditions.go:123] node cpu capacity is 2
I0923 13:11:07.940686 720939 node_conditions.go:105] duration metric: took 3.797939ms to run NodePressure ...
I0923 13:11:07.940698 720939 start.go:241] waiting for startup goroutines ...
I0923 13:11:08.174105 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:08.306693 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:08.310919 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:08.672506 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:08.807134 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:08.808752 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:09.171735 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:09.311572 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:09.313005 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:09.671299 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:09.805678 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:09.807490 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:10.172031 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:10.306350 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:10.307496 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 13:11:17.671552 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:17.805721 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:17.808301 720939 kapi.go:107] duration metric: took 16.505278276s to wait for kubernetes.io/minikube-addons=registry ...
I0923 13:11:18.175454 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:18.306020 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:18.677669 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:18.807995 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:19.172319 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:19.306603 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:19.670709 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:19.805883 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:29.171958 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:29.306566 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:29.671338 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:29.805667 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:30.179149 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:30.305907 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:30.671945 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:30.806940 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:31.173333 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:31.306299 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:31.671678 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:31.806192 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:32.171865 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:32.306775 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:32.671065 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:32.806980 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:33.172453 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:33.305599 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:33.671623 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:33.806795 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:34.171707 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:34.305991 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:34.671247 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:34.805387 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:35.178534 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:35.306403 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:35.681463 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:35.807100 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:36.173772 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:36.306758 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:36.674212 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:36.806221 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:37.174544 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:37.305732 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:37.672576 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:37.808315 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:38.171695 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:38.306958 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:38.671664 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:38.807048 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:39.173046 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:39.305298 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:39.671538 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:39.806541 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:40.173084 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:40.306810 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:40.687821 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:40.806889 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:41.172701 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:41.310455 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:41.670953 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:41.806236 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:42.172568 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:42.307505 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:42.672410 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:42.807745 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:43.170859 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:43.305922 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:43.672406 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:43.805994 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:44.172228 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:44.306008 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:44.672422 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:44.806422 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:45.173375 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:45.307653 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:45.672390 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:45.805916 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:46.172136 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:46.305862 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:46.671645 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:46.819026 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:47.172284 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:47.305819 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:47.672551 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:47.806653 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:48.171324 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:48.309416 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:48.672431 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:48.806841 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:49.170908 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:49.309085 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:49.672170 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:49.806069 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:50.173767 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:50.306275 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:50.673376 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:50.806955 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:51.172539 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:51.306774 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:51.673511 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:51.806715 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:52.171496 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:52.306042 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:52.677229 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:52.805671 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:53.176395 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:53.305511 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:53.672457 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:53.805436 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:54.175832 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:54.312720 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:54.670967 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 13:11:54.806112 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:55.172000 720939 kapi.go:107] duration metric: took 53.005625527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0923 13:11:55.308212 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:55.805696 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:56.305572 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:56.805951 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:57.306084 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:57.806091 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:58.305892 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:58.805785 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:59.305234 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:11:59.805488 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:00.350297 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:00.805893 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:01.305935 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:01.805784 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:02.306626 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:02.806178 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:03.306094 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:03.805917 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:04.310835 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:04.806245 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:05.306438 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:05.805651 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:06.306173 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:06.805304 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:07.305884 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:07.805890 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:08.306408 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:08.806777 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:09.311879 720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0923 13:12:09.806668 720939 kapi.go:107] duration metric: took 1m8.505515388s to wait for app.kubernetes.io/name=ingress-nginx ...
I0923 13:12:27.512374 720939 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 13:12:27.512402 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:28.015237 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:28.509919 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:29.011529 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:29.511274 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:30.018681 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:30.510657 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:31.011640 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:31.510805 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:32.013712 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:32.511224 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:33.011394 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:33.510227 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:34.011401 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:34.510492 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:35.016200 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:35.511476 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:36.017772 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:36.511291 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:37.014422 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:37.510763 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:38.016874 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:38.510741 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:39.011669 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:39.510482 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:40.015279 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:40.511563 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:41.016464 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:41.510970 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:42.017485 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:42.510908 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:43.012017 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:43.510624 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:44.011303 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:44.510238 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:45.016701 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:45.511279 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:46.012615 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:46.510461 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:47.011812 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:47.510044 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:48.016738 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:48.511115 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:49.010868 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:49.510665 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:50.017050 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:50.511032 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:51.012979 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:51.511304 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:52.012185 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:52.511094 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:53.013078 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:53.509908 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:54.011482 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:54.511017 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:55.017227 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:55.511183 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:56.013357 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:56.510825 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:57.012335 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:57.510388 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:58.010921 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:58.510739 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:59.011815 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:12:59.510533 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:00.043760 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:00.512534 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:01.016282 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:01.510860 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:02.013410 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:02.510733 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:03.020866 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:03.510837 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:04.013555 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:04.510540 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:05.017637 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
[... identical "waiting for pod" poll messages repeated roughly every 500ms from 13:13:05 through 13:13:35 ...]
I0923 13:13:35.015908 720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 13:13:35.511155 720939 kapi.go:107] duration metric: took 2m31.504575966s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0923 13:13:35.513465 720939 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-816293 cluster.
I0923 13:13:35.515962 720939 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0923 13:13:35.517866 720939 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0923 13:13:35.519899 720939 out.go:177] * Enabled addons: storage-provisioner-rancher, volcano, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0923 13:13:35.521552 720939 addons.go:510] duration metric: took 2m46.707488022s for enable addons: enabled=[storage-provisioner-rancher volcano cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0923 13:13:35.521599 720939 start.go:246] waiting for cluster config update ...
I0923 13:13:35.521620 720939 start.go:255] writing updated cluster config ...
I0923 13:13:35.521893 720939 ssh_runner.go:195] Run: rm -f paused
I0923 13:13:35.860079 720939 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0923 13:13:35.862280 720939 out.go:177] * Done! kubectl is now configured to use "addons-816293" cluster and "default" namespace by default
==> Docker <==
Sep 23 13:22:47 addons-816293 dockerd[1285]: time="2024-09-23T13:22:47.499149427Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f538ed6cef339445 traceID=cc2baa26cc4631cd6d38b1dd019950db
Sep 23 13:22:47 addons-816293 dockerd[1285]: time="2024-09-23T13:22:47.501585854Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f538ed6cef339445 traceID=cc2baa26cc4631cd6d38b1dd019950db
Sep 23 13:22:53 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:22:53Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 23 13:22:54 addons-816293 dockerd[1285]: time="2024-09-23T13:22:54.861305738Z" level=info msg="ignoring event" container=7668f0ceedb8f4ec2752e9ea660771227eeea826142f518ca7c510d180ecc107 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:22:55 addons-816293 dockerd[1285]: time="2024-09-23T13:22:55.312383133Z" level=info msg="ignoring event" container=00177226599cc99a0b0b1a06432e1fd941a947505cd8bf04d9c7ef879735e76f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:22:55 addons-816293 dockerd[1285]: time="2024-09-23T13:22:55.469836803Z" level=info msg="ignoring event" container=88808b249d7bbbca660ced9ac38e50200e807e0ff1e94030f7dbbafd5c0ec2c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:22:55 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:22:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83024b1861650a1bf588c0c43be28e03ef0e0f4a30a60d47c62b5c8eb0698db4/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 23 13:22:56 addons-816293 dockerd[1285]: time="2024-09-23T13:22:56.032122262Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=f10cd9597e0ce995 traceID=1c3c1ec305c29342c5f8a2a907ee94e1
Sep 23 13:22:56 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:22:56Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Sep 23 13:22:56 addons-816293 dockerd[1285]: time="2024-09-23T13:22:56.750804799Z" level=info msg="ignoring event" container=db417b9e5803bf9e4963bbccc8b738f6accfabc6cd674738370a26ffd3f59c7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:22:58 addons-816293 dockerd[1285]: time="2024-09-23T13:22:58.789101074Z" level=info msg="ignoring event" container=83024b1861650a1bf588c0c43be28e03ef0e0f4a30a60d47c62b5c8eb0698db4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:00 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:23:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a964d58d99a33d1a75c4ba8703360cc3bd6aa5467902b5a8efff40c6a716044f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 23 13:23:01 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:23:01Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
Sep 23 13:23:01 addons-816293 dockerd[1285]: time="2024-09-23T13:23:01.759623170Z" level=info msg="ignoring event" container=3b8eb6ddc0f11ca8889ffedd16f98154678dbbb07d35b894b2af4d8a7c5ef314 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:02 addons-816293 dockerd[1285]: time="2024-09-23T13:23:02.993713477Z" level=info msg="ignoring event" container=a964d58d99a33d1a75c4ba8703360cc3bd6aa5467902b5a8efff40c6a716044f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:04 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:23:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b239ddbcd9391811402d602e085c02cb8ac091d0e99d376d8f04685d8cc0c8fd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 23 13:23:04 addons-816293 dockerd[1285]: time="2024-09-23T13:23:04.850782219Z" level=info msg="ignoring event" container=47d3997eaac56348674539dc0eae3d489623bd0c173c714bb3005fe952c52ec5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:06 addons-816293 dockerd[1285]: time="2024-09-23T13:23:06.078118788Z" level=info msg="ignoring event" container=b239ddbcd9391811402d602e085c02cb8ac091d0e99d376d8f04685d8cc0c8fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:16 addons-816293 dockerd[1285]: time="2024-09-23T13:23:16.494117357Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=55ff5c9b707c97b2 traceID=7cca6af5846fe79502952676d7653b2e
Sep 23 13:23:16 addons-816293 dockerd[1285]: time="2024-09-23T13:23:16.497101917Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=55ff5c9b707c97b2 traceID=7cca6af5846fe79502952676d7653b2e
Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.008758007Z" level=info msg="ignoring event" container=4fc7be12e52b17854c5da685e7ad5670fffb6f2f4640a8c872b292629a89ad62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.743112541Z" level=info msg="ignoring event" container=7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.743161460Z" level=info msg="ignoring event" container=305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.946523914Z" level=info msg="ignoring event" container=86255d787bbf10cf324d74fb05bbd2766725736b24518c28900784907494c5db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.995768009Z" level=info msg="ignoring event" container=2ae468bb920e1bfee11a176fdcbc4aa0c4d8bfbb3bbfa20b7c195b7d7465d4b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
47d3997eaac56 fc9db2894f4e4 29 seconds ago Exited helper-pod 0 b239ddbcd9391 helper-pod-delete-pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4
3b8eb6ddc0f11 busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140 32 seconds ago Exited busybox 0 a964d58d99a33 test-local-path
db417b9e5803b busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 37 seconds ago Exited helper-pod 0 83024b1861650 helper-pod-create-pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4
88808b249d7bb ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 40 seconds ago Exited gadget 7 37083df50b06c gadget-7v9cd
3584aca3bba88 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 1f9e4bfc2fdd9 gcp-auth-89d5ffd79-2v88k
029872f6b5cdf registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 46661f5ee0e4a ingress-nginx-controller-bc57996ff-s62wl
274979a9fe0e9 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 11 minutes ago Running csi-snapshotter 0 16b075d8daec5 csi-hostpathplugin-c4lh2
43a0658c6a738 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 11 minutes ago Running csi-provisioner 0 16b075d8daec5 csi-hostpathplugin-c4lh2
d02b14cda814e registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 11 minutes ago Running liveness-probe 0 16b075d8daec5 csi-hostpathplugin-c4lh2
45f822a835caa registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 11 minutes ago Running hostpath 0 16b075d8daec5 csi-hostpathplugin-c4lh2
f5bd230d2455b registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 11 minutes ago Running node-driver-registrar 0 16b075d8daec5 csi-hostpathplugin-c4lh2
4a6d829ad645e registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 11 minutes ago Running csi-resizer 0 e3a62a4a2f255 csi-hostpath-resizer-0
24fbfd17d5243 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 11 minutes ago Running csi-external-health-monitor-controller 0 16b075d8daec5 csi-hostpathplugin-c4lh2
d0d6116c976a3 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 11 minutes ago Running csi-attacher 0 9e888dd2d0cdc csi-hostpath-attacher-0
b086b738b7c6e registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 a2a807762b8df ingress-nginx-admission-patch-ftkgz
4f1f570cced09 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 b37e3d212caf2 ingress-nginx-admission-create-w5qhg
4c45f509a097f registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 11 minutes ago Running volume-snapshot-controller 0 d620a1b72fbd2 snapshot-controller-56fcc65765-k6tz8
35da5d5624d7e registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 11 minutes ago Running volume-snapshot-controller 0 a712d2cbd914c snapshot-controller-56fcc65765-w468l
8f5e8f334c8ac rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 8bf5c192f0896 local-path-provisioner-86d989889c-j9gpc
7db6e438b0f08 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 850c3c8d32af6 metrics-server-84c5f94fbc-v6k5c
68f2ed892c36d gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 63e882378472e kube-ingress-dns-minikube
7a2b773ac9dfe gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 12 minutes ago Running cloud-spanner-emulator 0 21d8d148c1eeb cloud-spanner-emulator-5b584cc74-v58d6
65ecee846c2c5 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 c7e3ccd0c5124 storage-provisioner
26bc8fd7126ec 2f6c962e7b831 12 minutes ago Running coredns 0 bad32c70ad21e coredns-7c65d6cfc9-rwnh8
324593818f525 24a140c548c07 12 minutes ago Running kube-proxy 0 953326dd498b9 kube-proxy-gwjn5
264f46b7575fb 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 bd9f4a6207b63 kube-scheduler-addons-816293
03006510c8a1e d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 fda909103c022 kube-apiserver-addons-816293
bfd0f71f456d2 279f381cb3736 12 minutes ago Running kube-controller-manager 0 c8ef304b7d93e kube-controller-manager-addons-816293
7fdbee2111413 27e3830e14027 12 minutes ago Running etcd 0 4df52ac6362f8 etcd-addons-816293
==> controller_ingress [029872f6b5cd] <==
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
W0923 13:12:09.184703 7 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0923 13:12:09.184845 7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I0923 13:12:09.199900 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
I0923 13:12:09.506695 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0923 13:12:09.549903 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0923 13:12:09.562273 7 nginx.go:271] "Starting NGINX Ingress controller"
I0923 13:12:09.583415 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bb62a6af-c7f1-4854-9523-4741fa21b40e", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0923 13:12:09.587445 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0eb70592-0188-41cb-8da2-74243fd19f81", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0923 13:12:09.588026 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"9643e1a1-73c3-405d-be82-80c979875247", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0923 13:12:10.764288 7 nginx.go:317] "Starting NGINX process"
I0923 13:12:10.764532 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0923 13:12:10.766456 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0923 13:12:10.766991 7 controller.go:193] "Configuration changes detected, backend reload required"
I0923 13:12:10.787467 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0923 13:12:10.787730 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-s62wl"
I0923 13:12:10.794219 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-s62wl" node="addons-816293"
I0923 13:12:10.812213 7 controller.go:213] "Backend successfully reloaded"
I0923 13:12:10.812511 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0923 13:12:10.812643 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-s62wl", UID:"29821d5d-7904-491d-a7ff-bd0e0644ae09", APIVersion:"v1", ResourceVersion:"1233", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [26bc8fd7126e] <==
[INFO] 10.244.0.7:34067 - 9749 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107052s
[INFO] 10.244.0.7:34812 - 57063 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002680123s
[INFO] 10.244.0.7:34812 - 37346 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002523842s
[INFO] 10.244.0.7:42141 - 59818 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166087s
[INFO] 10.244.0.7:42141 - 42153 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137123s
[INFO] 10.244.0.7:54786 - 13928 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198456s
[INFO] 10.244.0.7:54786 - 15510 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133989s
[INFO] 10.244.0.7:49660 - 41606 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109923s
[INFO] 10.244.0.7:49660 - 2179 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040082s
[INFO] 10.244.0.7:35772 - 12621 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090855s
[INFO] 10.244.0.7:35772 - 19787 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039031s
[INFO] 10.244.0.7:49288 - 41378 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002303323s
[INFO] 10.244.0.7:49288 - 42943 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002648264s
[INFO] 10.244.0.7:58628 - 45768 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081189s
[INFO] 10.244.0.7:58628 - 61383 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010299s
[INFO] 10.244.0.25:35480 - 44193 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200311s
[INFO] 10.244.0.25:37735 - 28043 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186583s
[INFO] 10.244.0.25:43220 - 59356 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108717s
[INFO] 10.244.0.25:58980 - 58863 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065993s
[INFO] 10.244.0.25:47844 - 42780 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098748s
[INFO] 10.244.0.25:36647 - 9954 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088598s
[INFO] 10.244.0.25:47597 - 42144 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002161608s
[INFO] 10.244.0.25:44744 - 13618 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002070237s
[INFO] 10.244.0.25:55206 - 26399 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001733714s
[INFO] 10.244.0.25:45039 - 27901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001583799s
==> describe nodes <==
Name: addons-816293
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-816293
kubernetes.io/os=linux
minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
minikube.k8s.io/name=addons-816293
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_23T13_10_44_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-816293
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-816293"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Sep 2024 13:10:40 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-816293
AcquireTime: <unset>
RenewTime: Mon, 23 Sep 2024 13:23:31 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 23 Sep 2024 13:23:18 +0000 Mon, 23 Sep 2024 13:10:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Sep 2024 13:23:18 +0000 Mon, 23 Sep 2024 13:10:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 23 Sep 2024 13:23:18 +0000 Mon, 23 Sep 2024 13:10:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 23 Sep 2024 13:23:18 +0000 Mon, 23 Sep 2024 13:10:41 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-816293
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 189321b7b41a49ecb3cbdf57a41c9ca7
System UUID: c0380f16-60fd-4321-84ff-494177588bf5
Boot ID: a368a3b9-64b6-4915-adf4-926cc803503e
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.3.0
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s
default cloud-spanner-emulator-5b584cc74-v58d6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-7v9cd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-2v88k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-s62wl 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-rwnh8 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpathplugin-c4lh2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system etcd-addons-816293 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-816293 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-816293 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-gwjn5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-816293 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-v6k5c 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-k6tz8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-w468l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-j9gpc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (5%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
  Type     Reason                   Age   From             Message
  ----     ------                   ----  ----             -------
  Normal   Starting                 12m   kube-proxy
  Normal   Starting                 12m   kubelet          Starting kubelet.
  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-816293 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-816293 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-816293 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           12m   node-controller  Node addons-816293 event: Registered Node addons-816293 in Controller
==> dmesg <==
[Sep23 12:41] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
[ +0.214721] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
[ +0.310277] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
==> etcd [7fdbee211141] <==
{"level":"info","ts":"2024-09-23T13:10:37.343567Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-23T13:10:37.343578Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-23T13:10:37.707811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-23T13:10:37.708069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-23T13:10:37.708230Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-23T13:10:37.708320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-23T13:10:37.708408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T13:10:37.708504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-23T13:10:37.708609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-23T13:10:37.713082Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T13:10:37.717133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-816293 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-23T13:10:37.717403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T13:10:37.717895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T13:10:37.718188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-23T13:10:37.718289Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-23T13:10:37.719054Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T13:10:37.727465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-23T13:10:37.725077Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T13:10:37.727859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T13:10:37.727963Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T13:10:37.726458Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T13:10:37.737137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-23T13:20:38.293461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1849}
{"level":"info","ts":"2024-09-23T13:20:38.357061Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1849,"took":"63.067825ms","hash":505204955,"current-db-size-bytes":8933376,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4931584,"current-db-size-in-use":"4.9 MB"}
{"level":"info","ts":"2024-09-23T13:20:38.357107Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":505204955,"revision":1849,"compact-revision":-1}
==> gcp-auth [3584aca3bba8] <==
2024/09/23 13:13:34 GCP Auth Webhook started!
2024/09/23 13:13:52 Ready to marshal response ...
2024/09/23 13:13:52 Ready to write response ...
2024/09/23 13:13:52 Ready to marshal response ...
2024/09/23 13:13:52 Ready to write response ...
2024/09/23 13:14:17 Ready to marshal response ...
2024/09/23 13:14:17 Ready to write response ...
2024/09/23 13:14:17 Ready to marshal response ...
2024/09/23 13:14:17 Ready to write response ...
2024/09/23 13:14:17 Ready to marshal response ...
2024/09/23 13:14:17 Ready to write response ...
2024/09/23 13:22:21 Ready to marshal response ...
2024/09/23 13:22:21 Ready to write response ...
2024/09/23 13:22:21 Ready to marshal response ...
2024/09/23 13:22:21 Ready to write response ...
2024/09/23 13:22:21 Ready to marshal response ...
2024/09/23 13:22:21 Ready to write response ...
2024/09/23 13:22:31 Ready to marshal response ...
2024/09/23 13:22:31 Ready to write response ...
2024/09/23 13:22:55 Ready to marshal response ...
2024/09/23 13:22:55 Ready to write response ...
2024/09/23 13:22:55 Ready to marshal response ...
2024/09/23 13:22:55 Ready to write response ...
2024/09/23 13:23:04 Ready to marshal response ...
2024/09/23 13:23:04 Ready to write response ...
==> kernel <==
13:23:34 up 3:06, 0 users, load average: 0.81, 0.77, 1.48
Linux addons-816293 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [03006510c8a1] <==
E0923 13:13:06.992214 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.42.25:443: connect: connection refused" logger="UnhandledError"
W0923 13:13:07.037379 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.42.25:443: connect: connection refused
E0923 13:13:07.037425 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.42.25:443: connect: connection refused" logger="UnhandledError"
I0923 13:13:52.419920 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0923 13:13:52.452109 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0923 13:14:07.279004 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0923 13:14:07.352476 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
E0923 13:14:07.621951 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
I0923 13:14:07.721138 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 13:14:07.743767 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
E0923 13:14:07.812209 1 watch.go:250] "Unhandled Error" err="write tcp 192.168.49.2:8443->10.244.0.16:50960: write: connection reset by peer" logger="UnhandledError"
I0923 13:14:07.884628 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0923 13:14:07.923620 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 13:14:08.113434 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 13:14:08.173518 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 13:14:08.353854 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0923 13:14:08.452461 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0923 13:14:08.858884 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0923 13:14:08.924724 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0923 13:14:08.945972 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0923 13:14:09.024185 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0923 13:14:09.413014 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0923 13:14:09.539604 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0923 13:22:21.438806 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.29.145"}
E0923 13:23:20.074681 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
==> kube-controller-manager [bfd0f71f456d] <==
W0923 13:22:34.337066 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:22:34.337112 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 13:22:39.272097 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:22:39.272143 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 13:22:42.269702 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
I0923 13:22:43.935271 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="8.664µs"
I0923 13:22:47.473267 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-816293"
I0923 13:22:54.067390 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
W0923 13:22:58.042054 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:22:58.042099 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 13:23:04.586640 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:23:04.586681 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 13:23:04.664576 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="8.689µs"
W0923 13:23:08.335864 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:23:08.335907 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 13:23:10.445700 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:23:10.445747 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 13:23:16.444595 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:23:16.444644 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 13:23:17.132563 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:23:17.132606 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 13:23:18.477845 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-816293"
I0923 13:23:32.603010 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.226µs"
W0923 13:23:33.226377 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 13:23:33.226432 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [324593818f52] <==
I0923 13:10:48.679792 1 server_linux.go:66] "Using iptables proxy"
I0923 13:10:48.880659 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0923 13:10:48.880729 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0923 13:10:48.921053 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0923 13:10:48.921291 1 server_linux.go:169] "Using iptables Proxier"
I0923 13:10:48.923508 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0923 13:10:48.924391 1 server.go:483] "Version info" version="v1.31.1"
I0923 13:10:48.925096 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0923 13:10:48.956295 1 config.go:199] "Starting service config controller"
I0923 13:10:48.956531 1 shared_informer.go:313] Waiting for caches to sync for service config
I0923 13:10:48.956699 1 config.go:105] "Starting endpoint slice config controller"
I0923 13:10:48.956793 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0923 13:10:48.961108 1 config.go:328] "Starting node config controller"
I0923 13:10:48.961296 1 shared_informer.go:313] Waiting for caches to sync for node config
I0923 13:10:49.057264 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0923 13:10:49.057325 1 shared_informer.go:320] Caches are synced for service config
I0923 13:10:49.070322 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [264f46b7575f] <==
W0923 13:10:41.513614 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 13:10:41.513641 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.513697 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0923 13:10:41.513708 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.513875 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0923 13:10:41.513902 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.513971 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0923 13:10:41.513989 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.514050 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0923 13:10:41.514065 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.514137 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0923 13:10:41.514152 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.514210 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0923 13:10:41.514225 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.514266 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0923 13:10:41.514280 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.514450 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0923 13:10:41.514594 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0923 13:10:41.514619 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.514641 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0923 13:10:41.514653 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0923 13:10:41.514675 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 13:10:41.515021 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0923 13:10:41.515047 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0923 13:10:42.805901 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 23 13:23:25 addons-816293 kubelet[2339]: I0923 13:23:25.307700 2339 scope.go:117] "RemoveContainer" containerID="88808b249d7bbbca660ced9ac38e50200e807e0ff1e94030f7dbbafd5c0ec2c9"
Sep 23 13:23:25 addons-816293 kubelet[2339]: E0923 13:23:25.307985 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-7v9cd_gadget(d6a0eec7-6353-45b0-b40c-7d1b00387139)\"" pod="gadget/gadget-7v9cd" podUID="d6a0eec7-6353-45b0-b40c-7d1b00387139"
Sep 23 13:23:25 addons-816293 kubelet[2339]: E0923 13:23:25.310718 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3365eeb6-9e62-4b5a-917e-979eac5a9b59"
Sep 23 13:23:28 addons-816293 kubelet[2339]: E0923 13:23:28.311294 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="f5de9158-d9e6-4b50-894e-b5d96aa9b8a2"
Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.129645 2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-gcp-creds\") pod \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\" (UID: \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\") "
Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.130191 2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866mw\" (UniqueName: \"kubernetes.io/projected/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-kube-api-access-866mw\") pod \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\" (UID: \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\") "
Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.130130 2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2" (UID: "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.136835 2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-kube-api-access-866mw" (OuterVolumeSpecName: "kube-api-access-866mw") pod "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2" (UID: "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2"). InnerVolumeSpecName "kube-api-access-866mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.230658 2339 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-gcp-creds\") on node \"addons-816293\" DevicePath \"\""
Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.230700 2339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-866mw\" (UniqueName: \"kubernetes.io/projected/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-kube-api-access-866mw\") on node \"addons-816293\" DevicePath \"\""
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.140658 2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn2rj\" (UniqueName: \"kubernetes.io/projected/ec93b34f-db00-4bde-8ed0-46a67564f5cc-kube-api-access-wn2rj\") pod \"ec93b34f-db00-4bde-8ed0-46a67564f5cc\" (UID: \"ec93b34f-db00-4bde-8ed0-46a67564f5cc\") "
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.140712 2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptxgl\" (UniqueName: \"kubernetes.io/projected/7b435d50-4b55-4c70-b6d9-b0e1fd522370-kube-api-access-ptxgl\") pod \"7b435d50-4b55-4c70-b6d9-b0e1fd522370\" (UID: \"7b435d50-4b55-4c70-b6d9-b0e1fd522370\") "
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.145861 2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b435d50-4b55-4c70-b6d9-b0e1fd522370-kube-api-access-ptxgl" (OuterVolumeSpecName: "kube-api-access-ptxgl") pod "7b435d50-4b55-4c70-b6d9-b0e1fd522370" (UID: "7b435d50-4b55-4c70-b6d9-b0e1fd522370"). InnerVolumeSpecName "kube-api-access-ptxgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.150981 2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec93b34f-db00-4bde-8ed0-46a67564f5cc-kube-api-access-wn2rj" (OuterVolumeSpecName: "kube-api-access-wn2rj") pod "ec93b34f-db00-4bde-8ed0-46a67564f5cc" (UID: "ec93b34f-db00-4bde-8ed0-46a67564f5cc"). InnerVolumeSpecName "kube-api-access-wn2rj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.241100 2339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wn2rj\" (UniqueName: \"kubernetes.io/projected/ec93b34f-db00-4bde-8ed0-46a67564f5cc-kube-api-access-wn2rj\") on node \"addons-816293\" DevicePath \"\""
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.241153 2339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ptxgl\" (UniqueName: \"kubernetes.io/projected/7b435d50-4b55-4c70-b6d9-b0e1fd522370-kube-api-access-ptxgl\") on node \"addons-816293\" DevicePath \"\""
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.325648 2339 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5de9158-d9e6-4b50-894e-b5d96aa9b8a2" path="/var/lib/kubelet/pods/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2/volumes"
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.451361 2339 scope.go:117] "RemoveContainer" containerID="7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.532231 2339 scope.go:117] "RemoveContainer" containerID="7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
Sep 23 13:23:33 addons-816293 kubelet[2339]: E0923 13:23:33.533805 2339 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80" containerID="7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.533850 2339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"} err="failed to get container status \"7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.533887 2339 scope.go:117] "RemoveContainer" containerID="305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.568138 2339 scope.go:117] "RemoveContainer" containerID="305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
Sep 23 13:23:33 addons-816293 kubelet[2339]: E0923 13:23:33.569342 2339 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c" containerID="305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.569383 2339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"} err="failed to get container status \"305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
==> storage-provisioner [65ecee846c2c] <==
I0923 13:10:54.957657 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0923 13:10:54.983775 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0923 13:10:54.983816 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0923 13:10:54.998064 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0923 13:10:55.000412 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-816293_b7915d01-4c14-48c9-bfcd-2780ccded785!
I0923 13:10:55.005719 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea18252a-3635-419b-b449-ef6bd3393b94", APIVersion:"v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-816293_b7915d01-4c14-48c9-bfcd-2780ccded785 became leader
I0923 13:10:55.100907 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-816293_b7915d01-4c14-48c9-bfcd-2780ccded785!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-816293 -n addons-816293
helpers_test.go:261: (dbg) Run: kubectl --context addons-816293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-816293 describe pod busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-816293 describe pod busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc: exit status 1 (100.714307ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-816293/192.168.49.2
Start Time:       Mon, 23 Sep 2024 13:14:17 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nlm7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-4nlm7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-816293
  Normal   Pulling    7m48s (x4 over 9m18s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m48s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m48s (x4 over 9m17s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m34s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m6s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-w5qhg" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-ftkgz" not found
Error from server (NotFound): pods "local-path-provisioner-86d989889c-j9gpc" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-816293 describe pod busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.69s)