=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.054369ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qdjcc" [4ac3168f-0bcd-4153-867b-4c58e4383c15] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004242145s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g99fs" [9ffaedc2-7aad-4454-b435-9dc17bafb9aa] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006789781s
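An equivalent manual check of the two readiness gates above, sketched with kubectl wait (label selectors, namespace, and timeouts taken from the log lines; not part of the test output):
    kubectl --context addons-018527 -n kube-system wait --for=condition=Ready pod -l actual-registry=true --timeout=6m0s
    kubectl --context addons-018527 -n kube-system wait --for=condition=Ready pod -l registry-proxy=true --timeout=10m0s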
addons_test.go:342: (dbg) Run: kubectl --context addons-018527 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-018527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-018527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.126547687s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-018527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
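The failing step can be re-run by hand; a minimal sketch using the same image and in-cluster DNS name quoted above (the pod name registry-probe is arbitrary), followed by two queries to see whether the registry Service actually has endpoints:
    kubectl --context addons-018527 run registry-probe --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    kubectl --context addons-018527 -n kube-system get svc registry -o wide
    kubectl --context addons-018527 -n kube-system get endpoints registry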
addons_test.go:361: (dbg) Run: out/minikube-linux-arm64 -p addons-018527 ip
2024/09/10 17:43:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-arm64 -p addons-018527 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-018527
helpers_test.go:235: (dbg) docker inspect addons-018527:
-- stdout --
[
{
"Id": "405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef",
"Created": "2024-09-10T17:30:01.57169032Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8781,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-10T17:30:01.774458637Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:a4261f15fdf40db09c0b78a1feabe6bd85433327166d5c98909d23a556dff45f",
"ResolvConfPath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/hostname",
"HostsPath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/hosts",
"LogPath": "/var/lib/docker/containers/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef/405e529c548a99af672ca10e89702c4fe13632e86f76ce3acc1574eaece0cfef-json.log",
"Name": "/addons-018527",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-018527:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-018527",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617-init/diff:/var/lib/docker/overlay2/8cfe895502caa769e65b1686e7e1e919ac585a6fa1d0a386b9d76045d1757d52/diff",
"MergedDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617/merged",
"UpperDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617/diff",
"WorkDir": "/var/lib/docker/overlay2/726205eeae3e240a8f28a09301f6d9f73b3bd4960088ca4b8ceae3919c627617/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-018527",
"Source": "/var/lib/docker/volumes/addons-018527/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-018527",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-018527",
"name.minikube.sigs.k8s.io": "addons-018527",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2850c2cb8efd269102daae53cea680dc35aa5f039b665837eb72ca69f1fe2223",
"SandboxKey": "/var/run/docker/netns/2850c2cb8efd",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-018527": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "de2828d51f4c167fea23931843cb56718b83027887c3a3a825b8d99f09967148",
"EndpointID": "4e4507f1bb5b80e9dc772124c3e95db3b0258bfbad1281a8354a01d91ca100c8",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-018527",
"405e529c548a"
]
}
}
}
}
]
-- /stdout --
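The fields relevant to this test can also be pulled selectively with Go templates instead of dumping the whole inspect document; a sketch using the same field paths shown in the JSON above:
    docker inspect addons-018527 --format '{{.State.Status}} pid={{.State.Pid}}'
    docker inspect addons-018527 --format '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'
    docker inspect addons-018527 --format '{{(index .NetworkSettings.Networks "addons-018527").IPAddress}}'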
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-018527 -n addons-018527
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-018527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-018527 logs -n 25: (1.483602904s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| delete | --all | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| delete | -p download-only-933311 | download-only-933311 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| start | -o=json --download-only | download-only-643138 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | |
| | -p download-only-643138 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| delete | -p download-only-643138 | download-only-643138 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| delete | -p download-only-933311 | download-only-933311 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| delete | -p download-only-643138 | download-only-643138 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| start | --download-only -p | download-docker-686092 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | |
| | download-docker-686092 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-686092 | download-docker-686092 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| start | --download-only -p | binary-mirror-558808 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | |
| | binary-mirror-558808 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:38421 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-558808 | binary-mirror-558808 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
| addons | enable dashboard -p | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | |
| | addons-018527 | | | | | |
| addons | disable dashboard -p | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | |
| | addons-018527 | | | | | |
| start | -p addons-018527 --wait=true | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:33 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-018527 addons disable | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:33 UTC | 10 Sep 24 17:34 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-018527 addons | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-018527 addons | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-018527 addons | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
| | addons-018527 | | | | | |
| ssh | addons-018527 ssh curl -s | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-018527 ip | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
| addons | addons-018527 addons disable | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-018527 addons disable | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | |
| | ingress --alsologtostderr -v=1 | | | | | |
| ip | addons-018527 ip | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
| addons | addons-018527 addons disable | addons-018527 | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
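Flattened from the Audit rows above, the start invocation under test was roughly the following (a reconstruction from the table, not a verbatim log line):
    out/minikube-linux-arm64 start -p addons-018527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --container-runtime=docker --addons=ingress --addons=ingress-dns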
==> Last Start <==
Log file created at: 2024/09/10 17:29:36
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0910 17:29:36.606140 8286 out.go:345] Setting OutFile to fd 1 ...
I0910 17:29:36.606546 8286 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:29:36.606560 8286 out.go:358] Setting ErrFile to fd 2...
I0910 17:29:36.606566 8286 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:29:36.606925 8286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-2209/.minikube/bin
I0910 17:29:36.607607 8286 out.go:352] Setting JSON to false
I0910 17:29:36.608327 8286 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":724,"bootTime":1725988653,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0910 17:29:36.608397 8286 start.go:139] virtualization:
I0910 17:29:36.612515 8286 out.go:177] * [addons-018527] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0910 17:29:36.614669 8286 out.go:177] - MINIKUBE_LOCATION=19598
I0910 17:29:36.614727 8286 notify.go:220] Checking for updates...
I0910 17:29:36.618427 8286 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0910 17:29:36.620428 8286 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19598-2209/kubeconfig
I0910 17:29:36.622361 8286 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-2209/.minikube
I0910 17:29:36.624300 8286 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0910 17:29:36.626166 8286 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0910 17:29:36.628241 8286 driver.go:394] Setting default libvirt URI to qemu:///system
I0910 17:29:36.661206 8286 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
I0910 17:29:36.661313 8286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0910 17:29:36.725060 8286 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 17:29:36.714989787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0910 17:29:36.725167 8286 docker.go:318] overlay module found
I0910 17:29:36.729046 8286 out.go:177] * Using the docker driver based on user configuration
I0910 17:29:36.730988 8286 start.go:297] selected driver: docker
I0910 17:29:36.731009 8286 start.go:901] validating driver "docker" against <nil>
I0910 17:29:36.731024 8286 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0910 17:29:36.731681 8286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0910 17:29:36.785829 8286 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-10 17:29:36.77658729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0910 17:29:36.785986 8286 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0910 17:29:36.786220 8286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0910 17:29:36.788509 8286 out.go:177] * Using Docker driver with root privileges
I0910 17:29:36.790519 8286 cni.go:84] Creating CNI manager for ""
I0910 17:29:36.790551 8286 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0910 17:29:36.790563 8286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0910 17:29:36.790657 8286 start.go:340] cluster config:
{Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0910 17:29:36.792861 8286 out.go:177] * Starting "addons-018527" primary control-plane node in "addons-018527" cluster
I0910 17:29:36.794998 8286 cache.go:121] Beginning downloading kic base image for docker with docker
I0910 17:29:36.797164 8286 out.go:177] * Pulling base image v0.0.45-1725963390-19606 ...
I0910 17:29:36.799222 8286 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0910 17:29:36.799259 8286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local docker daemon
I0910 17:29:36.799281 8286 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0910 17:29:36.799298 8286 cache.go:56] Caching tarball of preloaded images
I0910 17:29:36.799380 8286 preload.go:172] Found /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0910 17:29:36.799390 8286 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0910 17:29:36.799724 8286 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/config.json ...
I0910 17:29:36.799749 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/config.json: {Name:mk124bf20b951e096c327decf76be8ea8a9c9f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:29:36.815692 8286 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 to local cache
I0910 17:29:36.815865 8286 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory
I0910 17:29:36.815888 8286 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 in local cache directory, skipping pull
I0910 17:29:36.815899 8286 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 exists in cache, skipping pull
I0910 17:29:36.815907 8286 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 as a tarball
I0910 17:29:36.815913 8286 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 from local cache
I0910 17:29:54.607314 8286 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 from cached tarball
I0910 17:29:54.607351 8286 cache.go:194] Successfully downloaded all kic artifacts
I0910 17:29:54.607379 8286 start.go:360] acquireMachinesLock for addons-018527: {Name:mkd0ce81edb47e790f272bf643f50e7d96e61889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0910 17:29:54.607488 8286 start.go:364] duration metric: took 88.41µs to acquireMachinesLock for "addons-018527"
I0910 17:29:54.607513 8286 start.go:93] Provisioning new machine with config: &{Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0910 17:29:54.607600 8286 start.go:125] createHost starting for "" (driver="docker")
I0910 17:29:54.610214 8286 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0910 17:29:54.610594 8286 start.go:159] libmachine.API.Create for "addons-018527" (driver="docker")
I0910 17:29:54.610631 8286 client.go:168] LocalClient.Create starting
I0910 17:29:54.610737 8286 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem
I0910 17:29:54.844752 8286 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem
I0910 17:29:55.307671 8286 cli_runner.go:164] Run: docker network inspect addons-018527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0910 17:29:55.321747 8286 cli_runner.go:211] docker network inspect addons-018527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0910 17:29:55.321835 8286 network_create.go:284] running [docker network inspect addons-018527] to gather additional debugging logs...
I0910 17:29:55.321856 8286 cli_runner.go:164] Run: docker network inspect addons-018527
W0910 17:29:55.337944 8286 cli_runner.go:211] docker network inspect addons-018527 returned with exit code 1
I0910 17:29:55.337976 8286 network_create.go:287] error running [docker network inspect addons-018527]: docker network inspect addons-018527: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-018527 not found
I0910 17:29:55.337988 8286 network_create.go:289] output of [docker network inspect addons-018527]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-018527 not found
** /stderr **
I0910 17:29:55.338076 8286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0910 17:29:55.355766 8286 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017691c0}
I0910 17:29:55.355805 8286 network_create.go:124] attempt to create docker network addons-018527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0910 17:29:55.355863 8286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-018527 addons-018527
I0910 17:29:55.427742 8286 network_create.go:108] docker network addons-018527 192.168.49.0/24 created
I0910 17:29:55.427770 8286 kic.go:121] calculated static IP "192.168.49.2" for the "addons-018527" container
I0910 17:29:55.427837 8286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0910 17:29:55.443851 8286 cli_runner.go:164] Run: docker volume create addons-018527 --label name.minikube.sigs.k8s.io=addons-018527 --label created_by.minikube.sigs.k8s.io=true
I0910 17:29:55.461983 8286 oci.go:103] Successfully created a docker volume addons-018527
I0910 17:29:55.462081 8286 cli_runner.go:164] Run: docker run --rm --name addons-018527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-018527 --entrypoint /usr/bin/test -v addons-018527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -d /var/lib
I0910 17:29:57.661341 8286 cli_runner.go:217] Completed: docker run --rm --name addons-018527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-018527 --entrypoint /usr/bin/test -v addons-018527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -d /var/lib: (2.199207811s)
I0910 17:29:57.661369 8286 oci.go:107] Successfully prepared a docker volume addons-018527
I0910 17:29:57.661399 8286 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0910 17:29:57.661417 8286 kic.go:194] Starting extracting preloaded images to volume ...
I0910 17:29:57.661500 8286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-018527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -I lz4 -xf /preloaded.tar -C /extractDir
I0910 17:30:01.497417 8286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19598-2209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-018527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 -I lz4 -xf /preloaded.tar -C /extractDir: (3.835879022s)
I0910 17:30:01.497452 8286 kic.go:203] duration metric: took 3.83603144s to extract preloaded images to volume ...
W0910 17:30:01.497647 8286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0910 17:30:01.497795 8286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0910 17:30:01.554442 8286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-018527 --name addons-018527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-018527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-018527 --network addons-018527 --ip 192.168.49.2 --volume addons-018527:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9
I0910 17:30:01.946009 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Running}}
I0910 17:30:01.970444 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:01.997379 8286 cli_runner.go:164] Run: docker exec addons-018527 stat /var/lib/dpkg/alternatives/iptables
I0910 17:30:02.097701 8286 oci.go:144] the created container "addons-018527" has a running status.
I0910 17:30:02.097738 8286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa...
I0910 17:30:02.455924 8286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0910 17:30:02.486696 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:02.504718 8286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0910 17:30:02.504737 8286 kic_runner.go:114] Args: [docker exec --privileged addons-018527 chown docker:docker /home/docker/.ssh/authorized_keys]
I0910 17:30:02.557819 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:02.583882 8286 machine.go:93] provisionDockerMachine start ...
I0910 17:30:02.583971 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:02.606353 8286 main.go:141] libmachine: Using SSH client type: native
I0910 17:30:02.606634 8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0910 17:30:02.606648 8286 main.go:141] libmachine: About to run SSH command:
hostname
I0910 17:30:02.758072 8286 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-018527
I0910 17:30:02.758137 8286 ubuntu.go:169] provisioning hostname "addons-018527"
I0910 17:30:02.758236 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:02.777322 8286 main.go:141] libmachine: Using SSH client type: native
I0910 17:30:02.777572 8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0910 17:30:02.777588 8286 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-018527 && echo "addons-018527" | sudo tee /etc/hostname
I0910 17:30:02.938054 8286 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-018527
I0910 17:30:02.938173 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:02.960580 8286 main.go:141] libmachine: Using SSH client type: native
I0910 17:30:02.960845 8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0910 17:30:02.960873 8286 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-018527' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-018527/g' /etc/hosts;
else
echo '127.0.1.1 addons-018527' | sudo tee -a /etc/hosts;
fi
fi
I0910 17:30:03.123157 8286 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0910 17:30:03.123236 8286 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19598-2209/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-2209/.minikube}
I0910 17:30:03.123274 8286 ubuntu.go:177] setting up certificates
I0910 17:30:03.123314 8286 provision.go:84] configureAuth start
I0910 17:30:03.123438 8286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-018527
I0910 17:30:03.144259 8286 provision.go:143] copyHostCerts
I0910 17:30:03.144350 8286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-2209/.minikube/ca.pem (1082 bytes)
I0910 17:30:03.144482 8286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-2209/.minikube/cert.pem (1123 bytes)
I0910 17:30:03.144545 8286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-2209/.minikube/key.pem (1679 bytes)
I0910 17:30:03.144595 8286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-2209/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca-key.pem org=jenkins.addons-018527 san=[127.0.0.1 192.168.49.2 addons-018527 localhost minikube]
I0910 17:30:03.767147 8286 provision.go:177] copyRemoteCerts
I0910 17:30:03.767218 8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0910 17:30:03.767295 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:03.788041 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:03.883281 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0910 17:30:03.907885 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0910 17:30:03.933405 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0910 17:30:03.957786 8286 provision.go:87] duration metric: took 834.441741ms to configureAuth
I0910 17:30:03.957816 8286 ubuntu.go:193] setting minikube options for container-runtime
I0910 17:30:03.958061 8286 config.go:182] Loaded profile config "addons-018527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:30:03.958134 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:03.978858 8286 main.go:141] libmachine: Using SSH client type: native
I0910 17:30:03.979105 8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0910 17:30:03.979120 8286 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0910 17:30:04.130984 8286 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0910 17:30:04.131004 8286 ubuntu.go:71] root file system type: overlay
I0910 17:30:04.131165 8286 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0910 17:30:04.131240 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:04.150771 8286 main.go:141] libmachine: Using SSH client type: native
I0910 17:30:04.151124 8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0910 17:30:04.151224 8286 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0910 17:30:04.290915 8286 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0910 17:30:04.291019 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:04.310270 8286 main.go:141] libmachine: Using SSH client type: native
I0910 17:30:04.310557 8286 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0910 17:30:04.310579 8286 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0910 17:30:05.175971 8286 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:36.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-10 17:30:04.284752176 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
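The unit that ends up in effect after this swap can be checked from the host with the same ssh form used in the Audit table above; a sketch (profile name from this run):
    out/minikube-linux-arm64 -p addons-018527 ssh "sudo systemctl cat docker --no-pager"
    out/minikube-linux-arm64 -p addons-018527 ssh "systemctl show docker -p ExecStart --no-pager"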
I0910 17:30:05.176005 8286 machine.go:96] duration metric: took 2.592104047s to provisionDockerMachine
I0910 17:30:05.176018 8286 client.go:171] duration metric: took 10.565379685s to LocalClient.Create
I0910 17:30:05.176053 8286 start.go:167] duration metric: took 10.565460424s to libmachine.API.Create "addons-018527"
I0910 17:30:05.176068 8286 start.go:293] postStartSetup for "addons-018527" (driver="docker")
I0910 17:30:05.176080 8286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0910 17:30:05.176153 8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0910 17:30:05.176200 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:05.195275 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:05.287709 8286 ssh_runner.go:195] Run: cat /etc/os-release
I0910 17:30:05.290818 8286 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0910 17:30:05.290855 8286 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0910 17:30:05.290867 8286 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0910 17:30:05.290873 8286 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0910 17:30:05.290883 8286 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-2209/.minikube/addons for local assets ...
I0910 17:30:05.290950 8286 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-2209/.minikube/files for local assets ...
I0910 17:30:05.290978 8286 start.go:296] duration metric: took 114.904348ms for postStartSetup
I0910 17:30:05.291278 8286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-018527
I0910 17:30:05.308350 8286 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/config.json ...
I0910 17:30:05.308644 8286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0910 17:30:05.308694 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:05.325782 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:05.415635 8286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0910 17:30:05.420357 8286 start.go:128] duration metric: took 10.812741817s to createHost
I0910 17:30:05.420379 8286 start.go:83] releasing machines lock for "addons-018527", held for 10.812882633s
I0910 17:30:05.420464 8286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-018527
I0910 17:30:05.438439 8286 ssh_runner.go:195] Run: cat /version.json
I0910 17:30:05.438492 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:05.438536 8286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0910 17:30:05.438596 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:05.457736 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:05.459134 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:05.676178 8286 ssh_runner.go:195] Run: systemctl --version
I0910 17:30:05.680631 8286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0910 17:30:05.685645 8286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0910 17:30:05.713336 8286 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0910 17:30:05.713448 8286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0910 17:30:05.744284 8286 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0910 17:30:05.744359 8286 start.go:495] detecting cgroup driver to use...
I0910 17:30:05.744407 8286 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0910 17:30:05.744543 8286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0910 17:30:05.762565 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0910 17:30:05.773067 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0910 17:30:05.784081 8286 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0910 17:30:05.784200 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0910 17:30:05.794413 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0910 17:30:05.804583 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0910 17:30:05.815047 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0910 17:30:05.825264 8286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0910 17:30:05.835358 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0910 17:30:05.845249 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0910 17:30:05.855798 8286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0910 17:30:05.866094 8286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0910 17:30:05.875163 8286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0910 17:30:05.884069 8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 17:30:05.974596 8286 ssh_runner.go:195] Run: sudo systemctl restart containerd
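The sed edits above rewrite /etc/containerd/config.toml before containerd is restarted. A minimal sketch for inspecting the keys they touch, assuming the same profile and file path:
# Hedged sketch: grep the containerd config for the keys patched above
# (cgroup driver, pause image, CNI conf dir, unprivileged ports).
minikube -p addons-018527 ssh -- grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml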
I0910 17:30:06.109413 8286 start.go:495] detecting cgroup driver to use...
I0910 17:30:06.109502 8286 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0910 17:30:06.109577 8286 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0910 17:30:06.128194 8286 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0910 17:30:06.128342 8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0910 17:30:06.148130 8286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0910 17:30:06.167756 8286 ssh_runner.go:195] Run: which cri-dockerd
I0910 17:30:06.172165 8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0910 17:30:06.182673 8286 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0910 17:30:06.207372 8286 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0910 17:30:06.317689 8286 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0910 17:30:06.423925 8286 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0910 17:30:06.424096 8286 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0910 17:30:06.446374 8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 17:30:06.549184 8286 ssh_runner.go:195] Run: sudo systemctl restart docker
I0910 17:30:06.825524 8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0910 17:30:06.838451 8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0910 17:30:06.850755 8286 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0910 17:30:06.944051 8286 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0910 17:30:07.039558 8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 17:30:07.148087 8286 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0910 17:30:07.163173 8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0910 17:30:07.174179 8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 17:30:07.265255 8286 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0910 17:30:07.334640 8286 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0910 17:30:07.334785 8286 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0910 17:30:07.339124 8286 start.go:563] Will wait 60s for crictl version
I0910 17:30:07.339239 8286 ssh_runner.go:195] Run: which crictl
I0910 17:30:07.344173 8286 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0910 17:30:07.381511 8286 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
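The version block above comes from crictl talking to cri-dockerd. A minimal sketch of the equivalent manual query, assuming the socket path this run waits for:
# Hedged sketch: query the CRI endpoint directly with crictl.
minikube -p addons-018527 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version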
I0910 17:30:07.381653 8286 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0910 17:30:07.403348 8286 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0910 17:30:07.428976 8286 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.1 ...
I0910 17:30:07.429107 8286 cli_runner.go:164] Run: docker network inspect addons-018527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0910 17:30:07.447428 8286 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0910 17:30:07.451273 8286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0910 17:30:07.462516 8286 kubeadm.go:883] updating cluster {Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0910 17:30:07.462633 8286 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0910 17:30:07.462695 8286 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0910 17:30:07.483779 8286 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0910 17:30:07.483813 8286 docker.go:615] Images already preloaded, skipping extraction
I0910 17:30:07.483893 8286 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0910 17:30:07.505133 8286 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0910 17:30:07.505157 8286 cache_images.go:84] Images are preloaded, skipping loading
I0910 17:30:07.505177 8286 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
I0910 17:30:07.505275 8286 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-018527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
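The kubelet flags above are written out as a systemd drop-in a few steps later. A minimal sketch for viewing the merged unit on the node, assuming the same profile:
# Hedged sketch: print kubelet.service together with its drop-ins
# (the 10-kubeadm.conf drop-in carries the ExecStart flags shown above).
minikube -p addons-018527 ssh -- sudo systemctl cat kubelet.service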
I0910 17:30:07.505357 8286 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0910 17:30:07.555821 8286 cni.go:84] Creating CNI manager for ""
I0910 17:30:07.555855 8286 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0910 17:30:07.555870 8286 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0910 17:30:07.555892 8286 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-018527 NodeName:addons-018527 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0910 17:30:07.556038 8286 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-018527"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
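The kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml a few steps later. A minimal sketch for sanity-checking it with kubeadm itself; the `kubeadm config validate` subcommand is assumed to be available in this kubeadm release:
# Hedged sketch: validate the generated config against the kubeadm API types.
minikube -p addons-018527 ssh -- sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml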
I0910 17:30:07.556108 8286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0910 17:30:07.565774 8286 binaries.go:44] Found k8s binaries, skipping transfer
I0910 17:30:07.565854 8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0910 17:30:07.574753 8286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0910 17:30:07.594236 8286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0910 17:30:07.613348 8286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0910 17:30:07.632958 8286 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0910 17:30:07.636561 8286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0910 17:30:07.647949 8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 17:30:07.742247 8286 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0910 17:30:07.758325 8286 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527 for IP: 192.168.49.2
I0910 17:30:07.758397 8286 certs.go:194] generating shared ca certs ...
I0910 17:30:07.758413 8286 certs.go:226] acquiring lock for ca certs: {Name:mk064211dcef1159c3fefad646daeaa676bc22b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:07.758528 8286 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key
I0910 17:30:08.271435 8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt ...
I0910 17:30:08.271488 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt: {Name:mk525f91ee991e7af186c1aa3251b98eaa768bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:08.271702 8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key ...
I0910 17:30:08.271717 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key: {Name:mk359588b98040abe8d71cb1dff488dcd56fc6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:08.271829 8286 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key
I0910 17:30:08.463432 8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.crt ...
I0910 17:30:08.463460 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.crt: {Name:mk9ac8f7ff34ab23843e3e0a509474eaad42eace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:08.463632 8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key ...
I0910 17:30:08.463645 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key: {Name:mkd28532052f6a8c196373722115cad6e3e4473d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:08.463726 8286 certs.go:256] generating profile certs ...
I0910 17:30:08.463786 8286 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.key
I0910 17:30:08.463806 8286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt with IP's: []
I0910 17:30:08.720829 8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt ...
I0910 17:30:08.720865 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.crt: {Name:mk57a965600b99b73f1d5b2cb45135fcd8e23e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:08.721094 8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.key ...
I0910 17:30:08.721107 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/client.key: {Name:mk2e6f1f97b44a486bba64c702c9a6809c6a0657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:08.721203 8286 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372
I0910 17:30:08.721220 8286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0910 17:30:09.070554 8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372 ...
I0910 17:30:09.070586 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372: {Name:mkb032b2cda693551797449aa0f56c82cb539253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:09.070870 8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372 ...
I0910 17:30:09.070889 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372: {Name:mk6e82363d379138605c66d45a05727ea1246f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:09.071025 8286 certs.go:381] copying /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt.201cb372 -> /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt
I0910 17:30:09.071135 8286 certs.go:385] copying /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key.201cb372 -> /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key
I0910 17:30:09.071211 8286 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key
I0910 17:30:09.071232 8286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt with IP's: []
I0910 17:30:09.980793 8286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt ...
I0910 17:30:09.980827 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt: {Name:mkff6197ab2e7bf1e631f06a88f092437d386b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:09.981003 8286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key ...
I0910 17:30:09.981017 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key: {Name:mk814ac16fbd11f1d16abc8fd73241fd8297b6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:09.981211 8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca-key.pem (1675 bytes)
I0910 17:30:09.981252 8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/ca.pem (1082 bytes)
I0910 17:30:09.981278 8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/cert.pem (1123 bytes)
I0910 17:30:09.981306 8286 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-2209/.minikube/certs/key.pem (1679 bytes)
I0910 17:30:09.981888 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0910 17:30:10.045347 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0910 17:30:10.105060 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0910 17:30:10.139510 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0910 17:30:10.168686 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0910 17:30:10.194203 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0910 17:30:10.219741 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0910 17:30:10.247357 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/profiles/addons-018527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0910 17:30:10.272068 8286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-2209/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0910 17:30:10.297357 8286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0910 17:30:10.316193 8286 ssh_runner.go:195] Run: openssl version
I0910 17:30:10.321799 8286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0910 17:30:10.331866 8286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0910 17:30:10.335965 8286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
I0910 17:30:10.336107 8286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0910 17:30:10.343311 8286 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
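The two steps above rely on OpenSSL's subject-hash naming convention for trusted CAs. A minimal sketch of the same check, run on the node, using the paths from this log:
# Hedged sketch: the symlink name is the OpenSSL subject hash of the CA cert.
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem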
I0910 17:30:10.354041 8286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0910 17:30:10.357832 8286 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0910 17:30:10.357890 8286 kubeadm.go:392] StartCluster: {Name:addons-018527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-018527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0910 17:30:10.358022 8286 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0910 17:30:10.375222 8286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0910 17:30:10.384733 8286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0910 17:30:10.394296 8286 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0910 17:30:10.394480 8286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0910 17:30:10.404281 8286 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0910 17:30:10.404302 8286 kubeadm.go:157] found existing configuration files:
I0910 17:30:10.404385 8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0910 17:30:10.413440 8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0910 17:30:10.413555 8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0910 17:30:10.422367 8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0910 17:30:10.431616 8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0910 17:30:10.431728 8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0910 17:30:10.440313 8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0910 17:30:10.450109 8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0910 17:30:10.450226 8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0910 17:30:10.459892 8286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0910 17:30:10.470046 8286 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0910 17:30:10.470119 8286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0910 17:30:10.479420 8286 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0910 17:30:10.521839 8286 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0910 17:30:10.521937 8286 kubeadm.go:310] [preflight] Running pre-flight checks
I0910 17:30:10.551722 8286 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0910 17:30:10.551804 8286 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
I0910 17:30:10.551844 8286 kubeadm.go:310] OS: Linux
I0910 17:30:10.551898 8286 kubeadm.go:310] CGROUPS_CPU: enabled
I0910 17:30:10.551963 8286 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0910 17:30:10.552024 8286 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0910 17:30:10.552090 8286 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0910 17:30:10.552141 8286 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0910 17:30:10.552221 8286 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0910 17:30:10.552278 8286 kubeadm.go:310] CGROUPS_PIDS: enabled
I0910 17:30:10.552345 8286 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0910 17:30:10.552405 8286 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0910 17:30:10.618404 8286 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0910 17:30:10.618557 8286 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0910 17:30:10.618677 8286 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0910 17:30:10.633265 8286 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0910 17:30:10.638497 8286 out.go:235] - Generating certificates and keys ...
I0910 17:30:10.638734 8286 kubeadm.go:310] [certs] Using existing ca certificate authority
I0910 17:30:10.638847 8286 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0910 17:30:10.797110 8286 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0910 17:30:11.454946 8286 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0910 17:30:11.768421 8286 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0910 17:30:12.163676 8286 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0910 17:30:12.588391 8286 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0910 17:30:12.588696 8286 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-018527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0910 17:30:13.155633 8286 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0910 17:30:13.155968 8286 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-018527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0910 17:30:13.872257 8286 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0910 17:30:14.348119 8286 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0910 17:30:14.634213 8286 kubeadm.go:310] [certs] Generating "sa" key and public key
I0910 17:30:14.634603 8286 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0910 17:30:14.951222 8286 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0910 17:30:15.170258 8286 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0910 17:30:15.792783 8286 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0910 17:30:16.167077 8286 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0910 17:30:17.458837 8286 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0910 17:30:17.459658 8286 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0910 17:30:17.463833 8286 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0910 17:30:17.466533 8286 out.go:235] - Booting up control plane ...
I0910 17:30:17.466650 8286 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0910 17:30:17.467107 8286 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0910 17:30:17.468386 8286 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0910 17:30:17.486095 8286 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0910 17:30:17.492293 8286 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0910 17:30:17.492353 8286 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0910 17:30:17.604729 8286 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0910 17:30:17.604861 8286 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0910 17:30:19.105938 8286 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501562351s
I0910 17:30:19.106031 8286 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0910 17:30:25.108247 8286 kubeadm.go:310] [api-check] The API server is healthy after 6.002261459s
I0910 17:30:25.128734 8286 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0910 17:30:25.144590 8286 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0910 17:30:25.183141 8286 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0910 17:30:25.183334 8286 kubeadm.go:310] [mark-control-plane] Marking the node addons-018527 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0910 17:30:25.196383 8286 kubeadm.go:310] [bootstrap-token] Using token: ni4uj8.svsil8e4x0j42lib
I0910 17:30:25.198223 8286 out.go:235] - Configuring RBAC rules ...
I0910 17:30:25.198386 8286 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0910 17:30:25.205277 8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0910 17:30:25.216568 8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0910 17:30:25.224174 8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0910 17:30:25.228949 8286 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0910 17:30:25.233296 8286 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0910 17:30:25.517177 8286 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0910 17:30:25.942944 8286 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0910 17:30:26.515999 8286 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0910 17:30:26.517302 8286 kubeadm.go:310]
I0910 17:30:26.517375 8286 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0910 17:30:26.517388 8286 kubeadm.go:310]
I0910 17:30:26.517468 8286 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0910 17:30:26.517477 8286 kubeadm.go:310]
I0910 17:30:26.517503 8286 kubeadm.go:310] mkdir -p $HOME/.kube
I0910 17:30:26.517563 8286 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0910 17:30:26.517616 8286 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0910 17:30:26.517625 8286 kubeadm.go:310]
I0910 17:30:26.517677 8286 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0910 17:30:26.517685 8286 kubeadm.go:310]
I0910 17:30:26.517731 8286 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0910 17:30:26.517739 8286 kubeadm.go:310]
I0910 17:30:26.517790 8286 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0910 17:30:26.517865 8286 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0910 17:30:26.517935 8286 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0910 17:30:26.517943 8286 kubeadm.go:310]
I0910 17:30:26.518024 8286 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0910 17:30:26.518102 8286 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0910 17:30:26.518111 8286 kubeadm.go:310]
I0910 17:30:26.518192 8286 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ni4uj8.svsil8e4x0j42lib \
I0910 17:30:26.518294 8286 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:2949b2a2dda6376e1bb92d867ada754fab30b7a6343fd8388bdd9e6344c68eb2 \
I0910 17:30:26.518318 8286 kubeadm.go:310] --control-plane
I0910 17:30:26.518322 8286 kubeadm.go:310]
I0910 17:30:26.518436 8286 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0910 17:30:26.518446 8286 kubeadm.go:310]
I0910 17:30:26.518525 8286 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ni4uj8.svsil8e4x0j42lib \
I0910 17:30:26.518626 8286 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:2949b2a2dda6376e1bb92d867ada754fab30b7a6343fd8388bdd9e6344c68eb2
I0910 17:30:26.521134 8286 kubeadm.go:310] W0910 17:30:10.518238 1809 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0910 17:30:26.521495 8286 kubeadm.go:310] W0910 17:30:10.519161 1809 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0910 17:30:26.521743 8286 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
I0910 17:30:26.521882 8286 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
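The join commands printed above embed a --discovery-token-ca-cert-hash. A minimal sketch for recomputing that pin from the cluster CA, assuming the certificatesDir /var/lib/minikube/certs set in the config above (one common openssl recipe):
# Hedged sketch: sha256 pin of the CA public key, as used by kubeadm join.
minikube -p addons-018527 ssh -- sudo sh -c 'openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl pkey -pubin -outform der | openssl dgst -sha256'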
I0910 17:30:26.521910 8286 cni.go:84] Creating CNI manager for ""
I0910 17:30:26.521925 8286 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0910 17:30:26.525757 8286 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0910 17:30:26.528094 8286 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0910 17:30:26.537813 8286 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0910 17:30:26.558065 8286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0910 17:30:26.558187 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:26.558273 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-018527 minikube.k8s.io/updated_at=2024_09_10T17_30_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-018527 minikube.k8s.io/primary=true
I0910 17:30:26.705998 8286 ops.go:34] apiserver oom_adj: -16
I0910 17:30:26.706102 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:27.206734 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:27.706398 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:28.206269 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:28.706633 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:29.206805 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:29.707183 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:30.207304 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:30.706382 8286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0910 17:30:30.808584 8286 kubeadm.go:1113] duration metric: took 4.250436701s to wait for elevateKubeSystemPrivileges
I0910 17:30:30.808611 8286 kubeadm.go:394] duration metric: took 20.45072418s to StartCluster
I0910 17:30:30.808627 8286 settings.go:142] acquiring lock: {Name:mk08d9d8b25bc27f9f84ae0f54ae1e531fa50eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:30.808734 8286 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19598-2209/kubeconfig
I0910 17:30:30.809142 8286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-2209/kubeconfig: {Name:mk6dfa0cdc9dcc6fca3c984f41ed79b7f8cca436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0910 17:30:30.809320 8286 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0910 17:30:30.809411 8286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0910 17:30:30.809666 8286 config.go:182] Loaded profile config "addons-018527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:30:30.809697 8286 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0910 17:30:30.809778 8286 addons.go:69] Setting yakd=true in profile "addons-018527"
I0910 17:30:30.809802 8286 addons.go:234] Setting addon yakd=true in "addons-018527"
I0910 17:30:30.809826 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.810298 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.810467 8286 addons.go:69] Setting inspektor-gadget=true in profile "addons-018527"
I0910 17:30:30.810495 8286 addons.go:234] Setting addon inspektor-gadget=true in "addons-018527"
I0910 17:30:30.810517 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.810885 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.811390 8286 addons.go:69] Setting cloud-spanner=true in profile "addons-018527"
I0910 17:30:30.811425 8286 addons.go:234] Setting addon cloud-spanner=true in "addons-018527"
I0910 17:30:30.811449 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.811830 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.815129 8286 addons.go:69] Setting metrics-server=true in profile "addons-018527"
I0910 17:30:30.815180 8286 addons.go:69] Setting gcp-auth=true in profile "addons-018527"
I0910 17:30:30.815226 8286 addons.go:234] Setting addon metrics-server=true in "addons-018527"
I0910 17:30:30.815276 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.815723 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.816337 8286 addons.go:69] Setting volcano=true in profile "addons-018527"
I0910 17:30:30.816382 8286 addons.go:234] Setting addon volcano=true in "addons-018527"
I0910 17:30:30.816412 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.816842 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.820736 8286 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-018527"
I0910 17:30:30.820789 8286 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-018527"
I0910 17:30:30.820825 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.821260 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.815276 8286 out.go:177] * Verifying Kubernetes components...
I0910 17:30:30.815174 8286 addons.go:69] Setting default-storageclass=true in profile "addons-018527"
I0910 17:30:30.838481 8286 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-018527"
I0910 17:30:30.838815 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.840228 8286 addons.go:69] Setting registry=true in profile "addons-018527"
I0910 17:30:30.840275 8286 addons.go:234] Setting addon registry=true in "addons-018527"
I0910 17:30:30.840313 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.843898 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.815236 8286 addons.go:69] Setting ingress=true in profile "addons-018527"
I0910 17:30:30.854430 8286 addons.go:234] Setting addon ingress=true in "addons-018527"
I0910 17:30:30.854479 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.855106 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.857295 8286 addons.go:69] Setting storage-provisioner=true in profile "addons-018527"
I0910 17:30:30.857354 8286 addons.go:234] Setting addon storage-provisioner=true in "addons-018527"
I0910 17:30:30.857389 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.857975 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.815241 8286 addons.go:69] Setting ingress-dns=true in profile "addons-018527"
I0910 17:30:30.871956 8286 addons.go:234] Setting addon ingress-dns=true in "addons-018527"
I0910 17:30:30.872007 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.872048 8286 addons.go:69] Setting volumesnapshots=true in profile "addons-018527"
I0910 17:30:30.872080 8286 addons.go:234] Setting addon volumesnapshots=true in "addons-018527"
I0910 17:30:30.872097 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.872529 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.884537 8286 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-018527"
I0910 17:30:30.884578 8286 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-018527"
I0910 17:30:30.884904 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.815166 8286 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-018527"
I0910 17:30:30.901093 8286 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-018527"
I0910 17:30:30.901133 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:30.901631 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.815231 8286 mustload.go:65] Loading cluster: addons-018527
I0910 17:30:30.930606 8286 config.go:182] Loaded profile config "addons-018527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:30:30.930883 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.979718 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:30.986921 8286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0910 17:30:31.056656 8286 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0910 17:30:31.069429 8286 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0910 17:30:31.069555 8286 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0910 17:30:31.069567 8286 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0910 17:30:31.069655 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.072991 8286 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0910 17:30:31.073021 8286 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0910 17:30:31.073114 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.080495 8286 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0910 17:30:31.080655 8286 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0910 17:30:31.082470 8286 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0910 17:30:31.090912 8286 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0910 17:30:31.091087 8286 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0910 17:30:31.091125 8286 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0910 17:30:31.091187 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.095439 8286 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0910 17:30:31.095480 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0910 17:30:31.095861 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.099219 8286 out.go:177] - Using image docker.io/registry:2.8.3
I0910 17:30:31.107130 8286 addons.go:234] Setting addon default-storageclass=true in "addons-018527"
I0910 17:30:31.107181 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:31.107608 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:31.126305 8286 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0910 17:30:31.126358 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0910 17:30:31.126424 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.126734 8286 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0910 17:30:31.128864 8286 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0910 17:30:31.135101 8286 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0910 17:30:31.135154 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0910 17:30:31.135229 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.153172 8286 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0910 17:30:31.153298 8286 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0910 17:30:31.155191 8286 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0910 17:30:31.155215 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0910 17:30:31.155289 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.160480 8286 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0910 17:30:31.160523 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0910 17:30:31.160615 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.170426 8286 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-018527"
I0910 17:30:31.170469 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:31.170876 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:31.177442 8286 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0910 17:30:31.208610 8286 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0910 17:30:31.210461 8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0910 17:30:31.210486 8286 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0910 17:30:31.210562 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.214986 8286 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0910 17:30:31.229353 8286 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0910 17:30:31.246526 8286 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0910 17:30:31.249633 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:31.251383 8286 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0910 17:30:31.278668 8286 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0910 17:30:31.278732 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0910 17:30:31.278809 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.278977 8286 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0910 17:30:31.281140 8286 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0910 17:30:31.284583 8286 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0910 17:30:31.290816 8286 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0910 17:30:31.315296 8286 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0910 17:30:31.325310 8286 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0910 17:30:31.328097 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0910 17:30:31.328209 8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0910 17:30:31.328331 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.345213 8286 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0910 17:30:31.350055 8286 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0910 17:30:31.350077 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0910 17:30:31.350152 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
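The repeated docker container inspect -f calls above use a Go template to read the host port that Docker published for the container's SSH port (22/tcp); that value is what the ssh clients in the following lines dial at 127.0.0.1:32768. A standalone sketch of the same lookup (illustrative only, not minikube source; the container name is taken from this log, and the log's version additionally wraps the template in literal quotes):

// hostport_sketch.go - hypothetical helper showing what the inspect template computes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as in the log: first mapping of the container's 22/tcp port.
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-018527").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32768
}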
I0910 17:30:31.363374 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.373019 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.398227 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.400711 8286 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0910 17:30:31.400730 8286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0910 17:30:31.400801 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.438198 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.443336 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.453627 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.482660 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.492112 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.507550 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.507952 8286 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0910 17:30:31.514513 8286 out.go:177] - Using image docker.io/busybox:stable
I0910 17:30:31.520315 8286 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0910 17:30:31.520337 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0910 17:30:31.520409 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:31.532530 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.543410 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
W0910 17:30:31.546633 8286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0910 17:30:31.546663 8286 retry.go:31] will retry after 143.140377ms: ssh: handshake failed: EOF
I0910 17:30:31.557900 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.560598 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:31.580201 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
W0910 17:30:31.581861 8286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0910 17:30:31.581884 8286 retry.go:31] will retry after 281.737112ms: ssh: handshake failed: EOF
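The two W-level lines above are transient SSH handshake failures while many clients dial the node at once; retry.go waits briefly (143ms, then 281ms here) and redials. A minimal sketch of that retry-with-growing-delay pattern, standard library only (illustrative, not minikube's retry.go; the delays in this log vary rather than strictly doubling):

// retry_sketch.go - hypothetical illustration of the "will retry after <delay>" pattern.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op until it succeeds or the attempt budget is spent,
// sleeping a growing delay between attempts.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // simulated transient failure
		}
		return nil
	})
}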
I0910 17:30:31.646553 8286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0910 17:30:31.646673 8286 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0910 17:30:32.093359 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0910 17:30:32.187619 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0910 17:30:32.192519 8286 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0910 17:30:32.192583 8286 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0910 17:30:32.231019 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0910 17:30:32.278311 8286 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0910 17:30:32.278379 8286 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0910 17:30:32.315419 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0910 17:30:32.435283 8286 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0910 17:30:32.435312 8286 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0910 17:30:32.474169 8286 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0910 17:30:32.474197 8286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0910 17:30:32.533873 8286 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0910 17:30:32.533900 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0910 17:30:32.548914 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0910 17:30:32.672889 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0910 17:30:32.672917 8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0910 17:30:32.775066 8286 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0910 17:30:32.775092 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0910 17:30:32.795219 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0910 17:30:32.895535 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0910 17:30:32.939449 8286 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0910 17:30:32.939493 8286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0910 17:30:32.958445 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0910 17:30:32.964923 8286 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0910 17:30:32.964959 8286 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0910 17:30:33.025393 8286 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0910 17:30:33.025422 8286 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0910 17:30:33.076387 8286 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0910 17:30:33.076416 8286 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0910 17:30:33.095722 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0910 17:30:33.095750 8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0910 17:30:33.126441 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0910 17:30:33.143634 8286 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0910 17:30:33.143677 8286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0910 17:30:33.194838 8286 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0910 17:30:33.194875 8286 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0910 17:30:33.241336 8286 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0910 17:30:33.241365 8286 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0910 17:30:33.284412 8286 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0910 17:30:33.284461 8286 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0910 17:30:33.313149 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0910 17:30:33.313189 8286 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0910 17:30:33.351940 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0910 17:30:33.351968 8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0910 17:30:33.474707 8286 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0910 17:30:33.474734 8286 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0910 17:30:33.516882 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0910 17:30:33.520092 8286 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0910 17:30:33.520117 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0910 17:30:33.539187 8286 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0910 17:30:33.539215 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0910 17:30:33.620495 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0910 17:30:33.620524 8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0910 17:30:33.764929 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0910 17:30:33.821062 8286 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0910 17:30:33.821088 8286 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0910 17:30:33.865660 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0910 17:30:33.996187 8286 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0910 17:30:33.996229 8286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0910 17:30:34.038884 8286 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0910 17:30:34.038913 8286 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0910 17:30:34.070688 8286 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.423991358s)
I0910 17:30:34.070819 8286 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.424240908s)
I0910 17:30:34.070840 8286 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
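The pipeline that completed at 17:30:34.070819 edits the coredns ConfigMap in place: the sed expressions insert a hosts block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", and the result is fed back through kubectl replace. Reconstructed from those sed expressions, the fragment injected into the Corefile is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

With that entry, pods resolving host.minikube.internal are pointed at 192.168.49.1, the host address on the cluster's docker network, which is exactly the "host record injected into CoreDNS's ConfigMap" noted above.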
I0910 17:30:34.072922 8286 node_ready.go:35] waiting up to 6m0s for node "addons-018527" to be "Ready" ...
I0910 17:30:34.082425 8286 node_ready.go:49] node "addons-018527" has status "Ready":"True"
I0910 17:30:34.082465 8286 node_ready.go:38] duration metric: took 9.385923ms for node "addons-018527" to be "Ready" ...
I0910 17:30:34.082476 8286 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0910 17:30:34.116142 8286 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace to be "Ready" ...
I0910 17:30:34.335095 8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0910 17:30:34.335172 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0910 17:30:34.359243 8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0910 17:30:34.359308 8286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0910 17:30:34.477284 8286 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0910 17:30:34.477360 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0910 17:30:34.565564 8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0910 17:30:34.565636 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0910 17:30:34.575079 8286 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-018527" context rescaled to 1 replicas
I0910 17:30:34.683409 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0910 17:30:35.059234 8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0910 17:30:35.059322 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0910 17:30:35.583329 8286 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0910 17:30:35.583374 8286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0910 17:30:35.632798 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.539370875s)
I0910 17:30:35.632868 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.445027959s)
I0910 17:30:36.098826 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0910 17:30:36.149444 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:38.259139 8286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0910 17:30:38.259255 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:38.288300 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:38.661649 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:39.246317 8286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0910 17:30:39.760533 8286 addons.go:234] Setting addon gcp-auth=true in "addons-018527"
I0910 17:30:39.760600 8286 host.go:66] Checking if "addons-018527" exists ...
I0910 17:30:39.761164 8286 cli_runner.go:164] Run: docker container inspect addons-018527 --format={{.State.Status}}
I0910 17:30:39.785464 8286 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0910 17:30:39.785523 8286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-018527
I0910 17:30:39.818425 8286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19598-2209/.minikube/machines/addons-018527/id_rsa Username:docker}
I0910 17:30:41.123196 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:43.128156 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:44.448672 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.217579819s)
I0910 17:30:44.448737 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.133294701s)
I0910 17:30:44.448780 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.899842557s)
I0910 17:30:44.448919 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.653677065s)
I0910 17:30:44.449043 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.55348521s)
I0910 17:30:44.449075 8286 addons.go:475] Verifying addon ingress=true in "addons-018527"
I0910 17:30:44.449313 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.490842863s)
I0910 17:30:44.449547 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.323076464s)
I0910 17:30:44.449561 8286 addons.go:475] Verifying addon registry=true in "addons-018527"
I0910 17:30:44.449901 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.932960065s)
I0910 17:30:44.449921 8286 addons.go:475] Verifying addon metrics-server=true in "addons-018527"
I0910 17:30:44.450004 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.685047245s)
W0910 17:30:44.450022 8286 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0910 17:30:44.450037 8286 retry.go:31] will retry after 364.686143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
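The failure above is an ordering race rather than a broken manifest: the stdout shows the VolumeSnapshotClass CRD being created by the very same apply, but the csi-hostpath-snapclass object cannot be mapped until the API server has established the new CRD, hence "ensure CRDs are installed first". minikube simply retries; the re-apply at 17:30:44.815 (with --force) completes at 17:30:47.307. Outside this harness, one way to avoid the race is to apply the CRD manifests first and block on their Established condition before applying objects of the new kind. A sketch under those assumptions (kubectl on PATH; manifest paths taken from this log):

// crd_wait_sketch.go - hypothetical ordering fix, not minikube's actual approach.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// 1. CRDs first.
	run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// 2. Block until the API server can serve the new kind.
	run("wait", "--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s")
	// 3. Now the VolumeSnapshotClass object can be mapped.
	run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}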
I0910 17:30:44.450075 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.584388512s)
I0910 17:30:44.450180 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.76667864s)
I0910 17:30:44.453780 8286 out.go:177] * Verifying registry addon...
I0910 17:30:44.454959 8286 out.go:177] * Verifying ingress addon...
I0910 17:30:44.457015 8286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0910 17:30:44.457244 8286 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-018527 service yakd-dashboard -n yakd-dashboard
I0910 17:30:44.458159 8286 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0910 17:30:44.526069 8286 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0910 17:30:44.526095 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:44.530963 8286 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0910 17:30:44.531047 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
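Each addon verification is a fixed-interval poll: kapi.go re-checks the labelled pods roughly every half second and emits the "waiting for pod ..., current state: Pending" line seen repeatedly below until every matching pod reports Running or the per-addon timeout expires. A stripped-down sketch of that loop for the registry selector, shelling out to kubectl instead of using minikube's own client (illustrative only; the 500ms interval and 6m deadline approximate what the log shows):

// poll_sketch.go - hypothetical readiness poll mirroring the waiting lines below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every pod matching the selector is in phase Running.
func allRunning(namespace, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pods",
		"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := allRunning("kube-system", "kubernetes.io/minikube-addons=registry"); err == nil && ok {
			fmt.Println("registry pods are Running")
			return
		}
		fmt.Println("waiting for pod \"kubernetes.io/minikube-addons=registry\", still Pending")
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for registry pods")
}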
I0910 17:30:44.815144 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0910 17:30:44.966090 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:44.967133 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:45.173700 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:45.471278 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:45.471481 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:45.577219 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.478332779s)
I0910 17:30:45.577249 8286 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-018527"
I0910 17:30:45.577499 8286 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.792006726s)
I0910 17:30:45.580172 8286 out.go:177] * Verifying csi-hostpath-driver addon...
I0910 17:30:45.580306 8286 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0910 17:30:45.583167 8286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0910 17:30:45.585542 8286 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0910 17:30:45.587622 8286 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0910 17:30:45.587696 8286 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0910 17:30:45.592973 8286 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0910 17:30:45.592997 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:45.696065 8286 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0910 17:30:45.696140 8286 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0910 17:30:45.744261 8286 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0910 17:30:45.744331 8286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0910 17:30:45.812237 8286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0910 17:30:45.965723 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:45.967001 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:46.122864 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:46.463742 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:46.464657 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:46.589474 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:46.964579 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:46.965420 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:47.088821 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:47.307816 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.492625167s)
I0910 17:30:47.360672 8286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.548399838s)
I0910 17:30:47.363475 8286 addons.go:475] Verifying addon gcp-auth=true in "addons-018527"
I0910 17:30:47.366126 8286 out.go:177] * Verifying gcp-auth addon...
I0910 17:30:47.369344 8286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0910 17:30:47.373208 8286 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0910 17:30:47.476081 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:47.478322 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:47.589733 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:47.622918 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:47.963519 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:47.964906 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:48.089272 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:48.464213 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:48.464718 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:48.587589 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:48.961174 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:48.963704 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:49.088367 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:49.462047 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:49.462364 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:49.589410 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:49.623554 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:49.974998 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:49.976020 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:50.090688 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:50.460880 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:50.462670 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:50.589197 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:50.961839 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:50.975671 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:51.089882 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:51.463394 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:51.464821 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:51.588727 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:51.961244 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:51.963304 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:52.087693 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:52.122936 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:52.462481 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:52.464225 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:52.588996 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:52.961126 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:52.963647 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:53.088029 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:53.460579 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:53.463011 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:53.589522 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:53.962395 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:53.963354 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:54.089295 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:54.123750 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:54.461918 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:54.463530 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:54.587963 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:54.962401 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:54.963498 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:55.096437 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:55.462635 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:55.463036 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:55.588690 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:55.963096 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:55.964995 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:56.091545 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:56.475505 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:56.476636 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:56.588720 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:56.623307 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:56.961425 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:56.962853 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:57.089200 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:57.461351 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:57.463324 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:57.587895 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:57.963472 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:57.964436 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:58.088552 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:58.462881 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:58.463439 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:58.587897 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:58.961809 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:58.964988 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:59.088449 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:59.122246 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:30:59.464786 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:30:59.466700 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:59.588534 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:30:59.961289 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:30:59.963835 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:00.125761 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:00.497784 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:00.498969 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:00.612237 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:00.961698 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:00.965063 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:01.088921 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:01.122578 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:31:01.462278 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:01.463531 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:01.588038 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:01.962502 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:01.963449 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:02.089079 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:02.477081 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:02.478240 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:02.588951 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:02.963627 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:02.964240 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:03.089235 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:03.123567 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:31:03.476244 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:03.477100 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:03.587949 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:03.964768 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:03.965324 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:04.092750 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:04.466532 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:04.468112 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:04.587781 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:04.966446 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:04.968174 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:05.089943 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:05.126548 8286 pod_ready.go:103] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"False"
I0910 17:31:05.475114 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:05.476270 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:05.588496 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:05.964570 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:05.965956 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:06.088495 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:06.124110 8286 pod_ready.go:93] pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace has status "Ready":"True"
I0910 17:31:06.124184 8286 pod_ready.go:82] duration metric: took 32.007965274s for pod "coredns-6f6b679f8f-sdtps" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.124212 8286 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.126893 8286 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-zrdzw" not found
I0910 17:31:06.126965 8286 pod_ready.go:82] duration metric: took 2.731957ms for pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace to be "Ready" ...
E0910 17:31:06.126990 8286 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-zrdzw" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-zrdzw" not found
I0910 17:31:06.127011 8286 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.134048 8286 pod_ready.go:93] pod "etcd-addons-018527" in "kube-system" namespace has status "Ready":"True"
I0910 17:31:06.134121 8286 pod_ready.go:82] duration metric: took 7.076185ms for pod "etcd-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.134162 8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.142151 8286 pod_ready.go:93] pod "kube-apiserver-addons-018527" in "kube-system" namespace has status "Ready":"True"
I0910 17:31:06.142224 8286 pod_ready.go:82] duration metric: took 8.035346ms for pod "kube-apiserver-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.142251 8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.149325 8286 pod_ready.go:93] pod "kube-controller-manager-addons-018527" in "kube-system" namespace has status "Ready":"True"
I0910 17:31:06.149397 8286 pod_ready.go:82] duration metric: took 7.123462ms for pod "kube-controller-manager-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.149424 8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdjgm" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.320729 8286 pod_ready.go:93] pod "kube-proxy-xdjgm" in "kube-system" namespace has status "Ready":"True"
I0910 17:31:06.320754 8286 pod_ready.go:82] duration metric: took 171.309068ms for pod "kube-proxy-xdjgm" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.320768 8286 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.463428 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:06.465596 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:06.588811 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:06.720162 8286 pod_ready.go:93] pod "kube-scheduler-addons-018527" in "kube-system" namespace has status "Ready":"True"
I0910 17:31:06.720190 8286 pod_ready.go:82] duration metric: took 399.414174ms for pod "kube-scheduler-addons-018527" in "kube-system" namespace to be "Ready" ...
I0910 17:31:06.720201 8286 pod_ready.go:39] duration metric: took 32.637713929s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0910 17:31:06.720219 8286 api_server.go:52] waiting for apiserver process to appear ...
I0910 17:31:06.720303 8286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0910 17:31:06.737642 8286 api_server.go:72] duration metric: took 35.928295345s to wait for apiserver process to appear ...
I0910 17:31:06.737707 8286 api_server.go:88] waiting for apiserver healthz status ...
I0910 17:31:06.737739 8286 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0910 17:31:06.745435 8286 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0910 17:31:06.746720 8286 api_server.go:141] control plane version: v1.31.0
I0910 17:31:06.746760 8286 api_server.go:131] duration metric: took 9.033185ms to wait for apiserver health ...
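
The healthz probe above (api_server.go:253/279) is a plain HTTPS GET against the control-plane endpoint, considered healthy once it answers 200 with the body "ok". The following is a minimal Go sketch of such a probe, not minikube's implementation: the address 192.168.49.2:8443 is taken from the log, while skipping TLS verification and relying on anonymous access to /healthz are assumptions suitable only for a quick local check.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust to your cluster.
	const healthzURL = "https://192.168.49.2:8443/healthz"

	// Assumptions: TLS verification is skipped and anonymous access to
	// /healthz is permitted; both are only reasonable for a local smoke check.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(healthzURL)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// A healthy apiserver answers 200 with the body "ok".
				fmt.Printf("%s returned %d: %s\n", healthzURL, resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver healthz did not report healthy in time")
}
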
I0910 17:31:06.746770 8286 system_pods.go:43] waiting for kube-system pods to appear ...
I0910 17:31:06.929627 8286 system_pods.go:59] 17 kube-system pods found
I0910 17:31:06.929664 8286 system_pods.go:61] "coredns-6f6b679f8f-sdtps" [583b5997-bafc-4b57-aa34-d00095de4aed] Running
I0910 17:31:06.929676 8286 system_pods.go:61] "csi-hostpath-attacher-0" [b45ababd-630f-4f31-b7c7-7fd839c504cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0910 17:31:06.929685 8286 system_pods.go:61] "csi-hostpath-resizer-0" [184c911a-dd86-4ff9-9655-d1ffd869d1dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0910 17:31:06.929693 8286 system_pods.go:61] "csi-hostpathplugin-mvsrq" [55ab278e-003b-4eb9-9120-9068d57eef7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0910 17:31:06.929699 8286 system_pods.go:61] "etcd-addons-018527" [8d8ee3e5-02ae-446e-a059-3ae8eb68c5ba] Running
I0910 17:31:06.929704 8286 system_pods.go:61] "kube-apiserver-addons-018527" [a56e3878-9412-4e3a-b75f-289231338059] Running
I0910 17:31:06.929708 8286 system_pods.go:61] "kube-controller-manager-addons-018527" [f6e7221b-6c18-49d0-8a91-b41b70e5b6fc] Running
I0910 17:31:06.929718 8286 system_pods.go:61] "kube-ingress-dns-minikube" [d807dd65-94ff-458f-90b4-26a6a55d5921] Running
I0910 17:31:06.929722 8286 system_pods.go:61] "kube-proxy-xdjgm" [f303e3f2-d196-448d-ac3a-965a45fc9253] Running
I0910 17:31:06.929732 8286 system_pods.go:61] "kube-scheduler-addons-018527" [a8cc5199-4392-4108-9e86-e2e08078002b] Running
I0910 17:31:06.929739 8286 system_pods.go:61] "metrics-server-84c5f94fbc-m4w8v" [c99d7e90-85cb-445e-9f15-c2a13cc75a7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0910 17:31:06.929755 8286 system_pods.go:61] "nvidia-device-plugin-daemonset-nzqkz" [8e4852cb-f95d-48ef-a74c-8da89946c2d5] Running
I0910 17:31:06.929776 8286 system_pods.go:61] "registry-66c9cd494c-qdjcc" [4ac3168f-0bcd-4153-867b-4c58e4383c15] Running
I0910 17:31:06.929783 8286 system_pods.go:61] "registry-proxy-g99fs" [9ffaedc2-7aad-4454-b435-9dc17bafb9aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0910 17:31:06.929790 8286 system_pods.go:61] "snapshot-controller-56fcc65765-bdvsv" [74ef0080-01a0-4ef4-9976-b7e370436ce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0910 17:31:06.929797 8286 system_pods.go:61] "snapshot-controller-56fcc65765-w5wvl" [24868c11-9966-48f6-9256-9a010dfd0cec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0910 17:31:06.929803 8286 system_pods.go:61] "storage-provisioner" [62081ed1-b8d0-41d3-b12b-49d7ae204d60] Running
I0910 17:31:06.929819 8286 system_pods.go:74] duration metric: took 183.042342ms to wait for pod list to return data ...
I0910 17:31:06.929833 8286 default_sa.go:34] waiting for default service account to be created ...
I0910 17:31:06.960879 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:06.962767 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:07.088128 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:07.120491 8286 default_sa.go:45] found service account: "default"
I0910 17:31:07.120518 8286 default_sa.go:55] duration metric: took 190.677803ms for default service account to be created ...
I0910 17:31:07.120528 8286 system_pods.go:116] waiting for k8s-apps to be running ...
I0910 17:31:07.327521 8286 system_pods.go:86] 17 kube-system pods found
I0910 17:31:07.327556 8286 system_pods.go:89] "coredns-6f6b679f8f-sdtps" [583b5997-bafc-4b57-aa34-d00095de4aed] Running
I0910 17:31:07.327566 8286 system_pods.go:89] "csi-hostpath-attacher-0" [b45ababd-630f-4f31-b7c7-7fd839c504cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0910 17:31:07.327583 8286 system_pods.go:89] "csi-hostpath-resizer-0" [184c911a-dd86-4ff9-9655-d1ffd869d1dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0910 17:31:07.327594 8286 system_pods.go:89] "csi-hostpathplugin-mvsrq" [55ab278e-003b-4eb9-9120-9068d57eef7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0910 17:31:07.327602 8286 system_pods.go:89] "etcd-addons-018527" [8d8ee3e5-02ae-446e-a059-3ae8eb68c5ba] Running
I0910 17:31:07.327608 8286 system_pods.go:89] "kube-apiserver-addons-018527" [a56e3878-9412-4e3a-b75f-289231338059] Running
I0910 17:31:07.327616 8286 system_pods.go:89] "kube-controller-manager-addons-018527" [f6e7221b-6c18-49d0-8a91-b41b70e5b6fc] Running
I0910 17:31:07.327621 8286 system_pods.go:89] "kube-ingress-dns-minikube" [d807dd65-94ff-458f-90b4-26a6a55d5921] Running
I0910 17:31:07.327626 8286 system_pods.go:89] "kube-proxy-xdjgm" [f303e3f2-d196-448d-ac3a-965a45fc9253] Running
I0910 17:31:07.327633 8286 system_pods.go:89] "kube-scheduler-addons-018527" [a8cc5199-4392-4108-9e86-e2e08078002b] Running
I0910 17:31:07.327639 8286 system_pods.go:89] "metrics-server-84c5f94fbc-m4w8v" [c99d7e90-85cb-445e-9f15-c2a13cc75a7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0910 17:31:07.327654 8286 system_pods.go:89] "nvidia-device-plugin-daemonset-nzqkz" [8e4852cb-f95d-48ef-a74c-8da89946c2d5] Running
I0910 17:31:07.327664 8286 system_pods.go:89] "registry-66c9cd494c-qdjcc" [4ac3168f-0bcd-4153-867b-4c58e4383c15] Running
I0910 17:31:07.327671 8286 system_pods.go:89] "registry-proxy-g99fs" [9ffaedc2-7aad-4454-b435-9dc17bafb9aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0910 17:31:07.327677 8286 system_pods.go:89] "snapshot-controller-56fcc65765-bdvsv" [74ef0080-01a0-4ef4-9976-b7e370436ce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0910 17:31:07.327686 8286 system_pods.go:89] "snapshot-controller-56fcc65765-w5wvl" [24868c11-9966-48f6-9256-9a010dfd0cec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0910 17:31:07.327693 8286 system_pods.go:89] "storage-provisioner" [62081ed1-b8d0-41d3-b12b-49d7ae204d60] Running
I0910 17:31:07.327702 8286 system_pods.go:126] duration metric: took 207.16867ms to wait for k8s-apps to be running ...
I0910 17:31:07.327714 8286 system_svc.go:44] waiting for kubelet service to be running ...
I0910 17:31:07.327780 8286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0910 17:31:07.341761 8286 system_svc.go:56] duration metric: took 14.039295ms WaitForService to wait for kubelet
I0910 17:31:07.341800 8286 kubeadm.go:582] duration metric: took 36.53244877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
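
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` on the node over SSH; `is-active --quiet` reports health purely through its exit code. A rough equivalent driven from the host is sketched below, assuming the minikube CLI is on PATH and using the addons-018527 profile from this run; it is not the ssh_runner code itself.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` signals health purely via its exit
	// status, which is how the check in the log interprets it.
	// The addons-018527 profile name is taken from the log above.
	cmd := exec.Command("minikube", "-p", "addons-018527", "ssh", "--",
		"sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
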
I0910 17:31:07.341821 8286 node_conditions.go:102] verifying NodePressure condition ...
I0910 17:31:07.464804 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:07.464994 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:07.521712 8286 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0910 17:31:07.521753 8286 node_conditions.go:123] node cpu capacity is 2
I0910 17:31:07.521768 8286 node_conditions.go:105] duration metric: took 179.94145ms to run NodePressure ...
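
The NodePressure step reads the node's reported capacity (2 CPUs and 203034800Ki of ephemeral storage in this run). A small client-go sketch that prints the same capacity fields is shown below; the default kubeconfig location is an assumption, and this is not the node_conditions.go code itself.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig location; minikube points it at the
	// addons-018527 context once start completes.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// These are the fields behind the "node cpu capacity" and
		// "node storage ephemeral capacity" lines above.
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
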
I0910 17:31:07.521781 8286 start.go:241] waiting for startup goroutines ...
I0910 17:31:07.588536 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:07.963914 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:07.964230 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:08.090944 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:08.463660 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:08.464129 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:08.588290 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:08.964809 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:08.966268 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:09.088499 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:09.461858 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0910 17:31:09.463050 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:09.590154 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:09.966442 8286 kapi.go:107] duration metric: took 25.509421396s to wait for kubernetes.io/minikube-addons=registry ...
I0910 17:31:09.967741 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:10.090187 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:10.465605 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:10.588962 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:10.972639 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:11.090100 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:11.467671 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:11.598116 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:11.965319 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:12.095506 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:12.462701 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:12.589214 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:12.963813 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:13.092374 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:13.466824 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:13.588597 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:13.963840 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:14.090057 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:14.463548 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:14.589397 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:14.962963 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:15.102486 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:15.463567 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:15.587988 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:15.962904 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:16.088470 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:16.466621 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:16.589482 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:16.963047 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:17.088506 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:17.462916 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:17.588848 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:17.963015 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:18.090414 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:18.475950 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:18.595329 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:18.963071 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:19.100137 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:19.474903 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:19.589792 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:19.966061 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:20.088592 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:20.478523 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:20.594926 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:20.975676 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:21.089474 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:21.463671 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:21.588175 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:21.962717 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:22.088476 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:22.463069 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:22.588234 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:22.963179 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:23.095243 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:23.468350 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:23.588304 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:23.963379 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:24.090862 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:24.462027 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:24.588632 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:24.975161 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:25.091220 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:25.462936 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:25.589048 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:25.962709 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:26.089329 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:26.464327 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:26.588087 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:26.979042 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:27.091104 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:27.464383 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:27.588509 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:27.962966 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:28.088254 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:28.476333 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:28.592457 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:28.962988 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:29.088522 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:29.468170 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:29.588232 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:29.963438 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:30.094903 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:30.463637 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:30.589433 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:30.963603 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:31.088821 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:31.462423 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:31.591288 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:31.963030 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:32.088019 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:32.464346 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:32.587956 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:32.963595 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:33.097131 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:33.475595 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:33.591192 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:33.963075 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:34.089576 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:34.463070 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:34.588515 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:34.976804 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:35.090318 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:35.476383 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:35.588972 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:35.964470 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:36.089667 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:36.463906 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:36.589433 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:36.963019 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:37.089909 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:37.464084 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:37.588243 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:37.963181 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:38.114587 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:38.477194 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:38.588818 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:38.963019 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:39.088815 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:39.466151 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:39.588083 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:39.963246 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:40.089935 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:40.478644 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:40.588195 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:40.962385 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:41.088553 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:41.463627 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:41.590876 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:41.963146 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:42.089899 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:42.462909 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:42.589099 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0910 17:31:42.963313 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:43.088474 8286 kapi.go:107] duration metric: took 57.505305538s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0910 17:31:43.462385 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:43.963379 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:44.462908 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:44.963699 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:45.472960 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:45.966067 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:46.463570 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:46.975712 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:47.463499 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:47.963548 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:48.462583 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:48.964027 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:49.463610 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:49.964356 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:50.474230 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:50.963098 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:51.462822 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:51.963160 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:52.464085 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:52.975175 8286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0910 17:31:53.464771 8286 kapi.go:107] duration metric: took 1m9.006606038s to wait for app.kubernetes.io/name=ingress-nginx ...
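
The kapi.go:96/107 lines above record a poll loop: list the pods matching a label selector (kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx), log any that are still Pending, and stop once none are. A simplified stand-in for such a loop using client-go is sketched below; it is not minikube's kapi implementation, and the kube-system namespace for the registry selector is taken from the pod listing earlier in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls until at least one pod matches the selector and none
// of the matching pods are still Pending, or the timeout expires. It is a
// simplified stand-in for the wait loop recorded by kapi.go in the log.
func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			pending := false
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					pending = true
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if !pending {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Selector and namespace taken from the log: the registry pods run in kube-system.
	if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are past Pending")
}

A watch-based wait would avoid the fixed poll interval, but a simple poll mirrors the cadence of the log lines above.
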
I0910 17:32:09.382647 8286 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0910 17:32:09.382669 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:09.874444 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:10.374076 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:10.872775 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:11.373263 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:11.873867 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:12.373674 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:12.873101 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:13.373141 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:13.873674 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:14.373829 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:14.872916 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:15.373119 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:15.874438 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:16.372898 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:16.872759 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:17.373477 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:17.873892 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:18.374297 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:18.876426 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:19.373754 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:19.873711 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:20.373851 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:20.872330 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:21.373470 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:21.873806 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:22.373542 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:22.872921 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:23.373143 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:23.873130 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:24.372536 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:24.873324 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:25.373428 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:25.874227 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:26.373289 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:26.873591 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:27.374039 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:27.873689 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:28.373740 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:28.872872 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:29.372699 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:29.873749 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:30.373224 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:30.874594 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:31.373345 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:31.873497 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:32.373178 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:32.873542 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:33.373984 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:33.873202 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:34.372917 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:34.873145 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:35.372896 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:35.873237 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:36.373500 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:36.872787 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:37.373476 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:37.873650 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:38.373750 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:38.873533 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:39.373558 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:39.872672 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:40.373665 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:40.873496 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:41.373930 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:41.873257 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:42.373808 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:42.872537 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:43.373439 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:43.873271 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:44.373310 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:44.873758 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:45.376331 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:45.873410 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:46.373892 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:46.873503 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:47.372747 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:47.873051 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:48.373445 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:48.873173 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:49.372463 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:49.873376 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:50.387059 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:50.873468 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:51.374023 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:51.873445 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:52.373623 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:52.874235 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:53.372953 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:53.873781 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:54.372719 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:54.873456 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:55.373820 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:55.872510 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:56.374025 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:56.874275 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:57.372942 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:57.872797 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:58.373107 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:58.874273 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:59.373343 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:32:59.873246 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:00.386589 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:00.872933 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:01.374477 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:01.873391 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:02.373771 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:02.873816 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:03.373079 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:03.873616 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:04.374298 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:04.873586 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:05.372754 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:05.873743 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:06.372847 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:06.873328 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:07.373087 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:07.873252 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:08.373114 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:08.873225 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:09.373049 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:09.872416 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:10.373471 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:10.873746 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:11.373751 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:11.872493 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:12.374139 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:12.872837 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:13.374573 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:13.873633 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:14.373604 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:14.873398 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:15.373328 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:15.873023 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:16.373219 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:16.873875 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:17.373716 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:17.874400 8286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0910 17:33:18.373781 8286 kapi.go:107] duration metric: took 2m31.004438367s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0910 17:33:18.375990 8286 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-018527 cluster.
I0910 17:33:18.378135 8286 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0910 17:33:18.380286 8286 out.go:177] * If you want the credentials mounted into existing pods as well, either recreate them or rerun addons enable with --refresh.
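
As the messages above note, a pod can opt out of the credential mount by carrying the gcp-auth-skip-secret label in its spec. A hypothetical client-go example of creating such a pod is sketched below; the label value "true", the default namespace, and the busybox image are illustrative assumptions rather than values from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			Labels: map[string]string{
				// The key is what the gcp-auth message above refers to; the
				// value "true" is an illustrative assumption.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "sleeper",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod without GCP credential mount:", created.Name)
}
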
I0910 17:33:18.383530 8286 out.go:177] * Enabled addons: ingress-dns, default-storageclass, volcano, nvidia-device-plugin, cloud-spanner, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0910 17:33:18.386089 8286 addons.go:510] duration metric: took 2m47.576383773s for enable addons: enabled=[ingress-dns default-storageclass volcano nvidia-device-plugin cloud-spanner storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0910 17:33:18.386140 8286 start.go:246] waiting for cluster config update ...
I0910 17:33:18.386161 8286 start.go:255] writing updated cluster config ...
I0910 17:33:18.386510 8286 ssh_runner.go:195] Run: rm -f paused
I0910 17:33:18.743245 8286 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
I0910 17:33:18.745493 8286 out.go:177] * Done! kubectl is now configured to use "addons-018527" cluster and "default" namespace by default
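Note on the gcp-auth guidance printed above: a minimal sketch of the two options it describes. The `gcp-auth-skip-secret` label key and the `--refresh` flag come from the log itself; the "true" label value and the pod name are illustrative assumptions, not taken from this run.
  # Opt a single pod out of credential mounting via the gcp-auth-skip-secret label (value "true" assumed)
  kubectl --context addons-018527 run no-gcp-creds --image=busybox --restart=Never \
    --labels=gcp-auth-skip-secret=true -- sleep 3600
  # Re-mount credentials into already-running pods by refreshing the addon
  out/minikube-linux-arm64 -p addons-018527 addons enable gcp-auth --refresh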
==> Docker <==
Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.192195524Z" level=info msg="ignoring event" container=5f97100224f9fb78aea0cc821bc7b77e9bd12d7c55a47e61bbb1c6b3ddffe8b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.282013539Z" level=info msg="ignoring event" container=1021e2600f7559a80cb94cd9a9fd67b3e0e2ad76789d3e74487d310754f45c56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.494954183Z" level=info msg="ignoring event" container=102ab856c08ffb9f7282a4dc8eb8ec63e7f03e4739f728e2462692847ca24826 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:38 addons-018527 dockerd[1277]: time="2024-09-10T17:42:38.519098542Z" level=info msg="ignoring event" container=7c772b5f1cbaee8dcfae65ff4af5453b21125bd74eb6069188af2c3eaff22931 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:40 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:42:40Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 10 17:42:42 addons-018527 dockerd[1277]: time="2024-09-10T17:42:42.548395852Z" level=info msg="ignoring event" container=8b61e9135207071e3c8eb78d69e6c7e0ad7ce3dbc023ac2901a2dd2b28c83311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:46 addons-018527 dockerd[1277]: time="2024-09-10T17:42:46.287111939Z" level=info msg="ignoring event" container=f0e4180102f491674b2b31fce8dd7d3e509b2c47b933a26cad5e4be0a322b66d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:46 addons-018527 dockerd[1277]: time="2024-09-10T17:42:46.404939487Z" level=info msg="ignoring event" container=d3d87b82603a9ea8d3f6f5edb9b6d47378d03efa70d59e7597042ef38c7a2ad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:51 addons-018527 dockerd[1277]: time="2024-09-10T17:42:51.988495030Z" level=info msg="ignoring event" container=13553bd588541fa0615d04fd6d9eb74e53fa34890d3e1ba6a6c67937a02484ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:42:55 addons-018527 dockerd[1277]: time="2024-09-10T17:42:55.013798012Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 10 17:42:55 addons-018527 dockerd[1277]: time="2024-09-10T17:42:55.041868674Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 10 17:42:58 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:42:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b95e5012cb085e08cf25502a3a4faacd857b944536ef10f2154dbd10f5c26bcc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 10 17:43:00 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:43:00Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
Sep 10 17:43:08 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:43:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e74703babb18966076cd24160ddfdf272640c052c86ac897099e9220d07d7303/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 10 17:43:08 addons-018527 dockerd[1277]: time="2024-09-10T17:43:08.601188836Z" level=info msg="ignoring event" container=9b30e5a78810e1a375a6a96b28730afb7ae1e914835561443cf4346349c267cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:08 addons-018527 dockerd[1277]: time="2024-09-10T17:43:08.686688112Z" level=info msg="ignoring event" container=0d4493a49450b050fe3900dd41f1e15db01177393497fc8f82799b8b87386cb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:08 addons-018527 cri-dockerd[1535]: time="2024-09-10T17:43:08Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
Sep 10 17:43:13 addons-018527 dockerd[1277]: time="2024-09-10T17:43:13.078237706Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b4a43fc30d3d00b4424cf2f28d9c2189293f6ba52242815096dfa5d83311e7a1
Sep 10 17:43:13 addons-018527 dockerd[1277]: time="2024-09-10T17:43:13.133824510Z" level=info msg="ignoring event" container=b4a43fc30d3d00b4424cf2f28d9c2189293f6ba52242815096dfa5d83311e7a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:13 addons-018527 dockerd[1277]: time="2024-09-10T17:43:13.286633991Z" level=info msg="ignoring event" container=5c41b94dcc61a7e1aa5129dd31bcd08af4b50b51aea193835be5f23aac6b32bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:15 addons-018527 dockerd[1277]: time="2024-09-10T17:43:15.479888749Z" level=info msg="ignoring event" container=50f7cf17b99f106ad07eff4d119219ed13836435f5f1129c6558a6553501ccc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.084666852Z" level=info msg="ignoring event" container=1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.210991202Z" level=info msg="ignoring event" container=fcc9398f81a125dba8d2ec3f9571af37a38e16c0e9fe162bf40e7564d804f5a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.303987162Z" level=info msg="ignoring event" container=49f4f15f4ca83135dbb5373d73fee969b523b6ac62dd8c24b880c71a27aeeb78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 10 17:43:16 addons-018527 dockerd[1277]: time="2024-09-10T17:43:16.542767130Z" level=info msg="ignoring event" container=bf114e40200f2370691dbbff7125ec43884c7d4cc069efdca30c74807cd659da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
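The "unauthorized: authentication failed" entries above are the Docker daemon rejecting pulls of gcr.io/k8s-minikube/busybox; the same error appears later in this output in the busybox pod's ImagePullBackOff events. A diagnostic sketch, not part of the test, to retry the pull directly on the node:
  # Attempt the same pull from inside the minikube node; expected to fail with the same auth error
  out/minikube-linux-arm64 -p addons-018527 ssh -- docker pull gcr.io/k8s-minikube/busybox:latest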
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7156cc762ae99 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 9 seconds ago Running hello-world-app 0 e74703babb189 hello-world-app-55bf9c44b4-72dss
66fb7448e9e6c nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf 17 seconds ago Running nginx 0 b95e5012cb085 nginx
deaaecc75b0cf gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 e8b96bfa37aa2 gcp-auth-89d5ffd79-sxmzn
1fc45eb577618 420193b27261a 11 minutes ago Exited patch 1 95c3e47956610 ingress-nginx-admission-patch-gtv72
a0cf5e63c2f6c registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 041cfd938d363 ingress-nginx-admission-create-bctfp
c4da8fea539f6 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 12 minutes ago Running yakd 0 2a0f46019df02 yakd-dashboard-67d98fc6b-2z6kf
f40728ed4393f rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 e0063888b72a1 local-path-provisioner-86d989889c-xqkbq
fcc9398f81a12 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 12 minutes ago Exited registry-proxy 0 bf114e40200f2 registry-proxy-g99fs
5b05093918ea6 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 c6835c0b4e909 cloud-spanner-emulator-769b77f747-5jq7w
4385c5ddb93f1 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 c5b0ddd65d8b3 nvidia-device-plugin-daemonset-nzqkz
3ae63920702c2 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 98214d0b1fe3f storage-provisioner
da7987e7a97f9 2437cf7621777 12 minutes ago Running coredns 0 25e0c9c442a06 coredns-6f6b679f8f-sdtps
039472306dbd6 71d55d66fd4ee 12 minutes ago Running kube-proxy 0 4b79eb5a3b276 kube-proxy-xdjgm
0355e6ec34842 27e3830e14027 12 minutes ago Running etcd 0 c82e990e69829 etcd-addons-018527
a4dbcd9b4921b cd0f0ae0ec9e0 12 minutes ago Running kube-apiserver 0 1e1d54198a857 kube-apiserver-addons-018527
4124297d4675f fbbbd428abb4d 12 minutes ago Running kube-scheduler 0 8428168c70e9c kube-scheduler-addons-018527
48092f20ec16c fcb0683e6bdbd 12 minutes ago Running kube-controller-manager 0 2a37be2c30848 kube-controller-manager-addons-018527
==> coredns [da7987e7a97f] <==
[INFO] 10.244.0.21:46615 - 3681 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000151827s
[INFO] 10.244.0.21:46615 - 50661 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000457862s
[INFO] 10.244.0.21:46615 - 17502 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000326654s
[INFO] 10.244.0.21:46615 - 17156 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000140882s
[INFO] 10.244.0.21:46615 - 22861 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001521155s
[INFO] 10.244.0.21:56741 - 56204 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000173915s
[INFO] 10.244.0.21:46615 - 15934 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005144893s
[INFO] 10.244.0.21:37711 - 58923 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000177698s
[INFO] 10.244.0.21:46615 - 29702 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000143286s
[INFO] 10.244.0.21:37711 - 7022 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071647s
[INFO] 10.244.0.21:56741 - 6687 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042962s
[INFO] 10.244.0.21:56741 - 26371 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0001259s
[INFO] 10.244.0.21:37711 - 64365 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049633s
[INFO] 10.244.0.21:56741 - 32583 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000124866s
[INFO] 10.244.0.21:37711 - 45083 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000371159s
[INFO] 10.244.0.21:56741 - 32129 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000277547s
[INFO] 10.244.0.21:37711 - 18429 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064763s
[INFO] 10.244.0.21:37711 - 39227 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000129756s
[INFO] 10.244.0.21:56741 - 6964 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074905s
[INFO] 10.244.0.21:56741 - 44958 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002550805s
[INFO] 10.244.0.21:37711 - 19458 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001540157s
[INFO] 10.244.0.21:56741 - 43079 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002301485s
[INFO] 10.244.0.21:37711 - 64251 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010764904s
[INFO] 10.244.0.21:56741 - 24811 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064278s
[INFO] 10.244.0.21:37711 - 26496 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000101653s
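The NXDOMAIN/NOERROR pattern above is ordinary resolv.conf search-list expansion: with ndots:5 (see the cri-dockerd resolv.conf rewrite earlier in this log), a lookup of hello-world-app.default.svc.cluster.local is first attempted with each of the querying pod's search suffixes appended (the NXDOMAIN entries) before the name is queried as-is and answered (NOERROR). A sketch to reproduce from a throwaway pod (the pod name is illustrative):
  # Run nslookup from a temporary busybox pod to observe the same search-domain walk
  kubectl --context addons-018527 run dns-probe --rm -it --restart=Never --image=busybox -- \
    nslookup hello-world-app.default.svc.cluster.local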
==> describe nodes <==
Name: addons-018527
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-018527
kubernetes.io/os=linux
minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
minikube.k8s.io/name=addons-018527
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_10T17_30_26_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-018527
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 10 Sep 2024 17:30:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-018527
AcquireTime: <unset>
RenewTime: Tue, 10 Sep 2024 17:43:09 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 10 Sep 2024 17:39:06 +0000 Tue, 10 Sep 2024 17:30:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 10 Sep 2024 17:39:06 +0000 Tue, 10 Sep 2024 17:30:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 10 Sep 2024 17:39:06 +0000 Tue, 10 Sep 2024 17:30:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 10 Sep 2024 17:39:06 +0000 Tue, 10 Sep 2024 17:30:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-018527
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: efffff036ca740c4bf5a4a66d6c81e7f
System UUID: 63da9386-f453-442b-9310-01906323f05d
Boot ID: 5dfcb38b-fd71-4dbc-a44d-87cb8fa8678e
Kernel Version: 5.15.0-1068-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s
default cloud-spanner-emulator-769b77f747-5jq7w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
default hello-world-app-55bf9c44b4-72dss 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20s
gcp-auth gcp-auth-89d5ffd79-sxmzn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system coredns-6f6b679f8f-sdtps 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system etcd-addons-018527 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-018527 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-018527 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-xdjgm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-018527 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system nvidia-device-plugin-daemonset-nzqkz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-xqkbq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
yakd-dashboard yakd-dashboard-67d98fc6b-2z6kf 0 (0%) 0 (0%) 128Mi (1%) 256Mi (3%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 298Mi (3%) 426Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m (x8 over 12m) kubelet Node addons-018527 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m (x7 over 12m) kubelet Node addons-018527 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m (x7 over 12m) kubelet Node addons-018527 status is now: NodeHasSufficientPID
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-018527 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-018527 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-018527 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-018527 event: Registered Node addons-018527 in Controller
Normal CIDRAssignmentFailed 12m cidrAllocator Node addons-018527 status is now: CIDRAssignmentFailed
==> dmesg <==
[Sep10 17:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014929] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.479642] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.766149] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.162243] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [0355e6ec3484] <==
{"level":"info","ts":"2024-09-10T17:30:20.363702Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-10T17:30:20.364480Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-09-10T17:30:20.998373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-10T17:30:20.998608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-10T17:30:20.998771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-10T17:30:20.998922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-10T17:30:20.999073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-10T17:30:20.999204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-10T17:30:20.999326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-10T17:30:21.000857Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-10T17:30:21.003974Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-018527 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-10T17:30:21.004336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-10T17:30:21.004834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-10T17:30:21.005171Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-10T17:30:21.005391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-10T17:30:21.005507Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-10T17:30:21.006471Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-10T17:30:21.007714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-10T17:30:21.015368Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-10T17:30:21.016567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-10T17:30:21.016805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-10T17:30:21.022363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-10T17:40:21.095989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1883}
{"level":"info","ts":"2024-09-10T17:40:21.143940Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1883,"took":"46.489615ms","hash":1121580216,"current-db-size-bytes":9035776,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":5070848,"current-db-size-in-use":"5.1 MB"}
{"level":"info","ts":"2024-09-10T17:40:21.143993Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1121580216,"revision":1883,"compact-revision":-1}
==> gcp-auth [deaaecc75b0c] <==
2024/09/10 17:33:17 GCP Auth Webhook started!
2024/09/10 17:33:35 Ready to marshal response ...
2024/09/10 17:33:35 Ready to write response ...
2024/09/10 17:33:35 Ready to marshal response ...
2024/09/10 17:33:35 Ready to write response ...
2024/09/10 17:34:00 Ready to marshal response ...
2024/09/10 17:34:00 Ready to write response ...
2024/09/10 17:34:00 Ready to marshal response ...
2024/09/10 17:34:00 Ready to write response ...
2024/09/10 17:34:00 Ready to marshal response ...
2024/09/10 17:34:00 Ready to write response ...
2024/09/10 17:42:09 Ready to marshal response ...
2024/09/10 17:42:09 Ready to write response ...
2024/09/10 17:42:15 Ready to marshal response ...
2024/09/10 17:42:15 Ready to write response ...
2024/09/10 17:42:21 Ready to marshal response ...
2024/09/10 17:42:21 Ready to write response ...
2024/09/10 17:42:57 Ready to marshal response ...
2024/09/10 17:42:57 Ready to write response ...
2024/09/10 17:43:07 Ready to marshal response ...
2024/09/10 17:43:07 Ready to write response ...
2024/09/10 17:43:17 Ready to marshal response ...
2024/09/10 17:43:17 Ready to write response ...
2024/09/10 17:43:17 Ready to marshal response ...
2024/09/10 17:43:17 Ready to write response ...
==> kernel <==
17:43:17 up 25 min, 0 users, load average: 0.82, 0.82, 0.75
Linux addons-018527 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kube-apiserver [a4dbcd9b4921] <==
I0910 17:33:51.384011 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0910 17:33:51.520525 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0910 17:33:51.828059 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0910 17:33:51.841981 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0910 17:33:51.898238 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0910 17:33:51.979575 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0910 17:33:52.398396 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0910 17:33:52.565453 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0910 17:42:16.590885 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0910 17:42:37.987561 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0910 17:42:37.987617 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0910 17:42:38.023679 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0910 17:42:38.024044 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0910 17:42:38.047826 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0910 17:42:38.047914 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0910 17:42:38.200291 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0910 17:42:38.200343 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0910 17:42:39.032236 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0910 17:42:39.200769 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0910 17:42:39.234789 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0910 17:42:51.904468 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0910 17:42:53.044028 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0910 17:42:57.597785 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0910 17:42:57.923434 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.189.109"}
I0910 17:43:07.600575 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.193.86"}
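The two "allocated clusterIPs" entries above correspond to the Services created for the nginx and hello-world-app test workloads; a quick way to confirm them (a sketch, not part of the test):
  # List the two Services and their cluster IPs
  kubectl --context addons-018527 get svc -n default nginx hello-world-app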
==> kube-controller-manager [48092f20ec16] <==
I0910 17:43:01.073855 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0910 17:43:01.073918 1 shared_informer.go:320] Caches are synced for garbage collector
I0910 17:43:02.166146 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
W0910 17:43:03.020088 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0910 17:43:03.020132 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0910 17:43:03.495551 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0910 17:43:03.495593 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0910 17:43:07.434949 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.960852ms"
I0910 17:43:07.443341 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.342573ms"
I0910 17:43:07.443463 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="83.241µs"
I0910 17:43:07.463162 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.872µs"
W0910 17:43:08.045431 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0910 17:43:08.045524 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0910 17:43:09.411783 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.303899ms"
I0910 17:43:09.411854 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.009µs"
I0910 17:43:09.992125 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
I0910 17:43:09.996764 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.448µs"
I0910 17:43:09.998756 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
W0910 17:43:11.243948 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0910 17:43:11.244003 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0910 17:43:11.582105 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0910 17:43:11.582149 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0910 17:43:13.501367 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0910 17:43:13.501429 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0910 17:43:16.012204 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="18.831µs"
==> kube-proxy [039472306dbd] <==
I0910 17:30:32.373714 1 server_linux.go:66] "Using iptables proxy"
I0910 17:30:32.580649 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0910 17:30:32.580727 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0910 17:30:32.612097 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0910 17:30:32.612175 1 server_linux.go:169] "Using iptables Proxier"
I0910 17:30:32.617465 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0910 17:30:32.617912 1 server.go:483] "Version info" version="v1.31.0"
I0910 17:30:32.617930 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0910 17:30:32.619549 1 config.go:197] "Starting service config controller"
I0910 17:30:32.619574 1 shared_informer.go:313] Waiting for caches to sync for service config
I0910 17:30:32.619596 1 config.go:104] "Starting endpoint slice config controller"
I0910 17:30:32.619601 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0910 17:30:32.624262 1 config.go:326] "Starting node config controller"
I0910 17:30:32.624293 1 shared_informer.go:313] Waiting for caches to sync for node config
I0910 17:30:32.721993 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0910 17:30:32.722062 1 shared_informer.go:320] Caches are synced for service config
I0910 17:30:32.728680 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [4124297d4675] <==
W0910 17:30:23.278712 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0910 17:30:23.278753 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0910 17:30:23.278808 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0910 17:30:23.278820 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0910 17:30:23.278976 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0910 17:30:23.278993 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.104346 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0910 17:30:24.104622 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0910 17:30:24.134528 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0910 17:30:24.134644 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.243702 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0910 17:30:24.243744 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.260935 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0910 17:30:24.260983 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.323350 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0910 17:30:24.323635 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.355989 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0910 17:30:24.356261 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.376994 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0910 17:30:24.377036 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.435413 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0910 17:30:24.435635 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0910 17:30:24.473199 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0910 17:30:24.473402 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0910 17:30:26.647918 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.560507 2338 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bsv8w\" (UniqueName: \"kubernetes.io/projected/4ac3168f-0bcd-4153-867b-4c58e4383c15-kube-api-access-bsv8w\") on node \"addons-018527\" DevicePath \"\""
Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.613074 2338 scope.go:117] "RemoveContainer" containerID="1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"
Sep 10 17:43:16 addons-018527 kubelet[2338]: E0910 17:43:16.614564 2338 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e" containerID="1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"
Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.614664 2338 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"} err="failed to get container status \"1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1b8e9098d22b005a43ea1d02780f940f328b7e811f1b3db2d3f5a7a90973eb8e"
Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.863540 2338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf8ck\" (UniqueName: \"kubernetes.io/projected/9ffaedc2-7aad-4454-b435-9dc17bafb9aa-kube-api-access-pf8ck\") pod \"9ffaedc2-7aad-4454-b435-9dc17bafb9aa\" (UID: \"9ffaedc2-7aad-4454-b435-9dc17bafb9aa\") "
Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.876490 2338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffaedc2-7aad-4454-b435-9dc17bafb9aa-kube-api-access-pf8ck" (OuterVolumeSpecName: "kube-api-access-pf8ck") pod "9ffaedc2-7aad-4454-b435-9dc17bafb9aa" (UID: "9ffaedc2-7aad-4454-b435-9dc17bafb9aa"). InnerVolumeSpecName "kube-api-access-pf8ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 17:43:16 addons-018527 kubelet[2338]: I0910 17:43:16.964398 2338 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pf8ck\" (UniqueName: \"kubernetes.io/projected/9ffaedc2-7aad-4454-b435-9dc17bafb9aa-kube-api-access-pf8ck\") on node \"addons-018527\" DevicePath \"\""
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.718502 2338 scope.go:117] "RemoveContainer" containerID="fcc9398f81a125dba8d2ec3f9571af37a38e16c0e9fe162bf40e7564d804f5a6"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.890588 2338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13adb992-3006-4269-bc2a-255a3908ac95" path="/var/lib/kubelet/pods/13adb992-3006-4269-bc2a-255a3908ac95/volumes"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.891155 2338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac3168f-0bcd-4153-867b-4c58e4383c15" path="/var/lib/kubelet/pods/4ac3168f-0bcd-4153-867b-4c58e4383c15/volumes"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.891542 2338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffaedc2-7aad-4454-b435-9dc17bafb9aa" path="/var/lib/kubelet/pods/9ffaedc2-7aad-4454-b435-9dc17bafb9aa/volumes"
Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893363 2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ac3168f-0bcd-4153-867b-4c58e4383c15" containerName="registry"
Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893398 2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9aa821c4-0762-41d7-918a-69e014935d35" containerName="controller"
Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893409 2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d807dd65-94ff-458f-90b4-26a6a55d5921" containerName="minikube-ingress-dns"
Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893416 2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ffaedc2-7aad-4454-b435-9dc17bafb9aa" containerName="registry-proxy"
Sep 10 17:43:17 addons-018527 kubelet[2338]: E0910 17:43:17.893425 2338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d254e894-5053-48de-8b53-ba82389fc06c" containerName="gadget"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893461 2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa821c4-0762-41d7-918a-69e014935d35" containerName="controller"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893471 2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffaedc2-7aad-4454-b435-9dc17bafb9aa" containerName="registry-proxy"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893478 2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="d807dd65-94ff-458f-90b4-26a6a55d5921" containerName="minikube-ingress-dns"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893484 2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac3168f-0bcd-4153-867b-4c58e4383c15" containerName="registry"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.893490 2338 memory_manager.go:354] "RemoveStaleState removing state" podUID="d254e894-5053-48de-8b53-ba82389fc06c" containerName="gadget"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976667 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdv4q\" (UniqueName: \"kubernetes.io/projected/2f450bd5-2701-449b-a573-739a33a2a558-kube-api-access-mdv4q\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976721 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2f450bd5-2701-449b-a573-739a33a2a558-gcp-creds\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976755 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2f450bd5-2701-449b-a573-739a33a2a558-script\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
Sep 10 17:43:17 addons-018527 kubelet[2338]: I0910 17:43:17.976780 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2f450bd5-2701-449b-a573-739a33a2a558-data\") pod \"helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d\" (UID: \"2f450bd5-2701-449b-a573-739a33a2a558\") " pod="local-path-storage/helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d"
==> storage-provisioner [3ae63920702c] <==
I0910 17:30:38.677855 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0910 17:30:38.795132 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0910 17:30:38.795202 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0910 17:30:38.835889 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0910 17:30:38.836066 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-018527_0e9ffa8d-f6f7-4916-91bd-a91b67c325c1!
I0910 17:30:38.836990 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cee871a3-ba07-4699-8ab1-d63f5152f32e", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-018527_0e9ffa8d-f6f7-4916-91bd-a91b67c325c1 became leader
I0910 17:30:38.936524 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-018527_0e9ffa8d-f6f7-4916-91bd-a91b67c325c1!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-018527 -n addons-018527
helpers_test.go:261: (dbg) Run: kubectl --context addons-018527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-018527 describe pod busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-018527 describe pod busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d: exit status 1 (152.055734ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-018527/192.168.49.2
Start Time: Tue, 10 Sep 2024 17:34:00 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldgnv (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ldgnv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m19s default-scheduler Successfully assigned default/busybox to addons-018527
Normal Pulling 7m48s (x4 over 9m18s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m47s (x4 over 9m18s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m47s (x4 over 9m18s) kubelet Error: ErrImagePull
Warning Failed 7m35s (x6 over 9m17s) kubelet Error: ImagePullBackOff
Normal BackOff 4m7s (x21 over 9m17s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
busybox:
Image: busybox:stable
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m7428 (ro)
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-m7428:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
-- /stdout --
** stderr **
Error from server (NotFound): pods "helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-018527 describe pod busybox test-local-path helper-pod-create-pvc-a30f3b4e-5bdf-484b-a5d9-b3448a615e2d: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.17s)
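To retry only this failing subtest after investigating, the usual approach is a filtered go test run; the package path below is an assumption about the repository layout and may need adjusting, and additional test flags may be required in your environment:
  # Re-run just the Registry addon subtest
  go test ./test/integration -run 'TestAddons/parallel/Registry' -timeout 30m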