=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.604183ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-k7dbs" [4a976b45-4ffe-45bb-bf8e-8235e03fda10] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004533931s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7zbh8" [ee258d2f-09b0-4915-82e1-123bba604752] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00422783s
addons_test.go:342: (dbg) Run: kubectl --context addons-648158 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.108744092s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
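For reference, the probe that timed out above can be re-run by hand. The first command below is the one the test issued (the context/profile name addons-648158 and the service DNS name are taken from this log); the second is a generic diagnostic, not part of the test, that checks whether the registry Service in kube-system actually has endpoints:

  kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  kubectl --context addons-648158 -n kube-system get svc,endpoints registry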
addons_test.go:361: (dbg) Run: out/minikube-linux-arm64 -p addons-648158 ip
addons_test.go:390: (dbg) Run: out/minikube-linux-arm64 -p addons-648158 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-648158
helpers_test.go:235: (dbg) docker inspect addons-648158:
-- stdout --
[
{
"Id": "095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d",
"Created": "2024-09-12T21:45:16.454512971Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1596049,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-12T21:45:16.577305683Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:5a18b2e89815d9320db97822722b50bf88d564940d3d81fe93adf39e9c88570e",
"ResolvConfPath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/hostname",
"HostsPath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/hosts",
"LogPath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d-json.log",
"Name": "/addons-648158",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-648158:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-648158",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f-init/diff:/var/lib/docker/overlay2/fbbc1fff48c3f03ea4a55053e2bf32977df83d1328f1e6f776215c001793c7bc/diff",
"MergedDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f/merged",
"UpperDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f/diff",
"WorkDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-648158",
"Source": "/var/lib/docker/volumes/addons-648158/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-648158",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-648158",
"name.minikube.sigs.k8s.io": "addons-648158",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9806d98886e0647b97e53e9d4920f70a5cd1f6fb56d19d2f1f17f1abf95b7040",
"SandboxKey": "/var/run/docker/netns/9806d98886e0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34330"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34331"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34334"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34332"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34333"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-648158": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "6d702ae16bd925e83694b0120b326ba837c5291976a7675f05fe20b814d3032c",
"EndpointID": "725f79b339d0064018f1e21e8dc1a1ae1262ef510a2665dfcbd2e8944aaac933",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-648158",
"095af0a5b484"
]
}
}
}
}
]
-- /stdout --
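The inspect dump above is verbose; when only the published ports or the container address are of interest, docker format templates narrow it down (a minimal sketch reusing the container name from this run; the second template mirrors the one minikube itself runs later in this log):

  docker inspect -f '{{json .NetworkSettings.Ports}}' addons-648158
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-648158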
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-648158 -n addons-648158
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-648158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 logs -n 25: (1.222738712s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-658229 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | |
| | -p download-only-658229 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| delete | -p download-only-658229 | download-only-658229 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| start | -o=json --download-only | download-only-308645 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | |
| | -p download-only-308645 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| delete | -p download-only-308645 | download-only-308645 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| delete | -p download-only-658229 | download-only-658229 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| delete | -p download-only-308645 | download-only-308645 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| start | --download-only -p | download-docker-565752 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | |
| | download-docker-565752 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-565752 | download-docker-565752 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| start | --download-only -p | binary-mirror-696147 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | |
| | binary-mirror-696147 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:42489 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-696147 | binary-mirror-696147 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
| addons | enable dashboard -p | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | |
| | addons-648158 | | | | | |
| addons | disable dashboard -p | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | |
| | addons-648158 | | | | | |
| start | -p addons-648158 --wait=true | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:48 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-648158 addons disable | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:49 UTC | 12 Sep 24 21:49 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-648158 addons | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:57 UTC | 12 Sep 24 21:58 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-648158 addons | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-648158 addons | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
| | addons-648158 | | | | | |
| ip | addons-648158 ip | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
| addons | addons-648158 addons disable | addons-648158 | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
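For readability, the wrapped start entry in the audit table corresponds to a single invocation along these lines (reassembled from the rows above, assuming the same binary path used elsewhere in this run; flag order follows the table):

  out/minikube-linux-arm64 start -p addons-648158 --wait=true --memory=4000 --alsologtostderr \
    --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
    --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
    --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --container-runtime=docker \
    --addons=ingress --addons=ingress-dns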
==> Last Start <==
Log file created at: 2024/09/12 21:44:52
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0912 21:44:52.293426 1595550 out.go:345] Setting OutFile to fd 1 ...
I0912 21:44:52.293591 1595550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:44:52.293604 1595550 out.go:358] Setting ErrFile to fd 2...
I0912 21:44:52.293611 1595550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 21:44:52.293853 1595550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
I0912 21:44:52.294286 1595550 out.go:352] Setting JSON to false
I0912 21:44:52.295170 1595550 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23233,"bootTime":1726154260,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I0912 21:44:52.295245 1595550 start.go:139] virtualization:
I0912 21:44:52.297255 1595550 out.go:177] * [addons-648158] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0912 21:44:52.298472 1595550 out.go:177] - MINIKUBE_LOCATION=19616
I0912 21:44:52.298524 1595550 notify.go:220] Checking for updates...
I0912 21:44:52.301446 1595550 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0912 21:44:52.302951 1595550 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
I0912 21:44:52.304264 1595550 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
I0912 21:44:52.305594 1595550 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0912 21:44:52.306977 1595550 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0912 21:44:52.308309 1595550 driver.go:394] Setting default libvirt URI to qemu:///system
I0912 21:44:52.329697 1595550 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
I0912 21:44:52.329815 1595550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 21:44:52.393221 1595550 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 21:44:52.38325617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 21:44:52.393338 1595550 docker.go:318] overlay module found
I0912 21:44:52.394811 1595550 out.go:177] * Using the docker driver based on user configuration
I0912 21:44:52.396076 1595550 start.go:297] selected driver: docker
I0912 21:44:52.396092 1595550 start.go:901] validating driver "docker" against <nil>
I0912 21:44:52.396107 1595550 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0912 21:44:52.396759 1595550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 21:44:52.455832 1595550 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 21:44:52.446732647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 21:44:52.455997 1595550 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0912 21:44:52.456234 1595550 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0912 21:44:52.457440 1595550 out.go:177] * Using Docker driver with root privileges
I0912 21:44:52.458509 1595550 cni.go:84] Creating CNI manager for ""
I0912 21:44:52.458534 1595550 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0912 21:44:52.458544 1595550 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0912 21:44:52.458621 1595550 start.go:340] cluster config:
{Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0912 21:44:52.460430 1595550 out.go:177] * Starting "addons-648158" primary control-plane node in "addons-648158" cluster
I0912 21:44:52.461548 1595550 cache.go:121] Beginning downloading kic base image for docker with docker
I0912 21:44:52.462770 1595550 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
I0912 21:44:52.464061 1595550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0912 21:44:52.464109 1595550 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0912 21:44:52.464120 1595550 cache.go:56] Caching tarball of preloaded images
I0912 21:44:52.464126 1595550 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
I0912 21:44:52.464198 1595550 preload.go:172] Found /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0912 21:44:52.464208 1595550 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0912 21:44:52.464588 1595550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/config.json ...
I0912 21:44:52.464616 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/config.json: {Name:mk39e0bed83dea5ddf12769e075879530e448b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:44:52.478580 1595550 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
I0912 21:44:52.478708 1595550 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
I0912 21:44:52.478731 1595550 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
I0912 21:44:52.478736 1595550 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
I0912 21:44:52.478748 1595550 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
I0912 21:44:52.478757 1595550 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
I0912 21:45:09.501036 1595550 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
I0912 21:45:09.501078 1595550 cache.go:194] Successfully downloaded all kic artifacts
I0912 21:45:09.501125 1595550 start.go:360] acquireMachinesLock for addons-648158: {Name:mkf47fbdfabd638c92e4e58b5d8a772d37a8e926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0912 21:45:09.501258 1595550 start.go:364] duration metric: took 108.765µs to acquireMachinesLock for "addons-648158"
I0912 21:45:09.501293 1595550 start.go:93] Provisioning new machine with config: &{Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0912 21:45:09.501373 1595550 start.go:125] createHost starting for "" (driver="docker")
I0912 21:45:09.502918 1595550 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0912 21:45:09.503157 1595550 start.go:159] libmachine.API.Create for "addons-648158" (driver="docker")
I0912 21:45:09.503192 1595550 client.go:168] LocalClient.Create starting
I0912 21:45:09.503326 1595550 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem
I0912 21:45:10.436330 1595550 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem
I0912 21:45:10.626763 1595550 cli_runner.go:164] Run: docker network inspect addons-648158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0912 21:45:10.644364 1595550 cli_runner.go:211] docker network inspect addons-648158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0912 21:45:10.644467 1595550 network_create.go:284] running [docker network inspect addons-648158] to gather additional debugging logs...
I0912 21:45:10.644489 1595550 cli_runner.go:164] Run: docker network inspect addons-648158
W0912 21:45:10.668333 1595550 cli_runner.go:211] docker network inspect addons-648158 returned with exit code 1
I0912 21:45:10.668367 1595550 network_create.go:287] error running [docker network inspect addons-648158]: docker network inspect addons-648158: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-648158 not found
I0912 21:45:10.668391 1595550 network_create.go:289] output of [docker network inspect addons-648158]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-648158 not found
** /stderr **
I0912 21:45:10.668513 1595550 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 21:45:10.686214 1595550 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017cd350}
I0912 21:45:10.686265 1595550 network_create.go:124] attempt to create docker network addons-648158 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0912 21:45:10.686328 1595550 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-648158 addons-648158
I0912 21:45:10.754882 1595550 network_create.go:108] docker network addons-648158 192.168.49.0/24 created
I0912 21:45:10.754915 1595550 kic.go:121] calculated static IP "192.168.49.2" for the "addons-648158" container
I0912 21:45:10.755010 1595550 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0912 21:45:10.770554 1595550 cli_runner.go:164] Run: docker volume create addons-648158 --label name.minikube.sigs.k8s.io=addons-648158 --label created_by.minikube.sigs.k8s.io=true
I0912 21:45:10.787372 1595550 oci.go:103] Successfully created a docker volume addons-648158
I0912 21:45:10.787472 1595550 cli_runner.go:164] Run: docker run --rm --name addons-648158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-648158 --entrypoint /usr/bin/test -v addons-648158:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib
I0912 21:45:12.757080 1595550 cli_runner.go:217] Completed: docker run --rm --name addons-648158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-648158 --entrypoint /usr/bin/test -v addons-648158:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib: (1.969524908s)
I0912 21:45:12.757110 1595550 oci.go:107] Successfully prepared a docker volume addons-648158
I0912 21:45:12.757132 1595550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0912 21:45:12.757152 1595550 kic.go:194] Starting extracting preloaded images to volume ...
I0912 21:45:12.757240 1595550 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-648158:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir
I0912 21:45:16.386032 1595550 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-648158:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir: (3.628751783s)
I0912 21:45:16.386076 1595550 kic.go:203] duration metric: took 3.628909301s to extract preloaded images to volume ...
W0912 21:45:16.386235 1595550 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0912 21:45:16.386347 1595550 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0912 21:45:16.440109 1595550 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-648158 --name addons-648158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-648158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-648158 --network addons-648158 --ip 192.168.49.2 --volume addons-648158:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889
I0912 21:45:16.738593 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Running}}
I0912 21:45:16.760281 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:16.783349 1595550 cli_runner.go:164] Run: docker exec addons-648158 stat /var/lib/dpkg/alternatives/iptables
I0912 21:45:16.863499 1595550 oci.go:144] the created container "addons-648158" has a running status.
I0912 21:45:16.863532 1595550 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa...
I0912 21:45:17.481383 1595550 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0912 21:45:17.512071 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:17.531116 1595550 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0912 21:45:17.531135 1595550 kic_runner.go:114] Args: [docker exec --privileged addons-648158 chown docker:docker /home/docker/.ssh/authorized_keys]
I0912 21:45:17.600958 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:17.620855 1595550 machine.go:93] provisionDockerMachine start ...
I0912 21:45:17.620940 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:17.640331 1595550 main.go:141] libmachine: Using SSH client type: native
I0912 21:45:17.640736 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 34330 <nil> <nil>}
I0912 21:45:17.640749 1595550 main.go:141] libmachine: About to run SSH command:
hostname
I0912 21:45:17.796831 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-648158
I0912 21:45:17.796905 1595550 ubuntu.go:169] provisioning hostname "addons-648158"
I0912 21:45:17.797060 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:17.816878 1595550 main.go:141] libmachine: Using SSH client type: native
I0912 21:45:17.817148 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 34330 <nil> <nil>}
I0912 21:45:17.817169 1595550 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-648158 && echo "addons-648158" | sudo tee /etc/hostname
I0912 21:45:17.969507 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-648158
I0912 21:45:17.969638 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:17.986753 1595550 main.go:141] libmachine: Using SSH client type: native
I0912 21:45:17.986999 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 34330 <nil> <nil>}
I0912 21:45:17.987021 1595550 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-648158' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-648158/g' /etc/hosts;
else
echo '127.0.1.1 addons-648158' | sudo tee -a /etc/hosts;
fi
fi
I0912 21:45:18.129467 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0912 21:45:18.129492 1595550 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19616-1589418/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-1589418/.minikube}
I0912 21:45:18.129510 1595550 ubuntu.go:177] setting up certificates
I0912 21:45:18.129520 1595550 provision.go:84] configureAuth start
I0912 21:45:18.129583 1595550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-648158
I0912 21:45:18.145625 1595550 provision.go:143] copyHostCerts
I0912 21:45:18.145718 1595550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.pem (1082 bytes)
I0912 21:45:18.145853 1595550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-1589418/.minikube/cert.pem (1123 bytes)
I0912 21:45:18.145926 1595550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-1589418/.minikube/key.pem (1679 bytes)
I0912 21:45:18.145991 1595550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca-key.pem org=jenkins.addons-648158 san=[127.0.0.1 192.168.49.2 addons-648158 localhost minikube]
I0912 21:45:18.407925 1595550 provision.go:177] copyRemoteCerts
I0912 21:45:18.408005 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0912 21:45:18.408050 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:18.425554 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:18.526748 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0912 21:45:18.552750 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0912 21:45:18.575890 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0912 21:45:18.599170 1595550 provision.go:87] duration metric: took 469.636361ms to configureAuth
I0912 21:45:18.599197 1595550 ubuntu.go:193] setting minikube options for container-runtime
I0912 21:45:18.599388 1595550 config.go:182] Loaded profile config "addons-648158": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:45:18.599439 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:18.616470 1595550 main.go:141] libmachine: Using SSH client type: native
I0912 21:45:18.616717 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 34330 <nil> <nil>}
I0912 21:45:18.616735 1595550 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0912 21:45:18.757570 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0912 21:45:18.757634 1595550 ubuntu.go:71] root file system type: overlay
I0912 21:45:18.757762 1595550 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0912 21:45:18.757832 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:18.774755 1595550 main.go:141] libmachine: Using SSH client type: native
I0912 21:45:18.775011 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 34330 <nil> <nil>}
I0912 21:45:18.775100 1595550 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0912 21:45:18.926075 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0912 21:45:18.926206 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:18.943690 1595550 main.go:141] libmachine: Using SSH client type: native
I0912 21:45:18.943950 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil> [] 0s} 127.0.0.1 34330 <nil> <nil>}
I0912 21:45:18.943974 1595550 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0912 21:45:19.713569 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:36.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-12 21:45:18.917186149 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0912 21:45:19.713599 1595550 machine.go:96] duration metric: took 2.092725054s to provisionDockerMachine
I0912 21:45:19.713614 1595550 client.go:171] duration metric: took 10.210408914s to LocalClient.Create
I0912 21:45:19.713626 1595550 start.go:167] duration metric: took 10.210469737s to libmachine.API.Create "addons-648158"
I0912 21:45:19.713637 1595550 start.go:293] postStartSetup for "addons-648158" (driver="docker")
I0912 21:45:19.713651 1595550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0912 21:45:19.713720 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0912 21:45:19.713769 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:19.733297 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:19.830114 1595550 ssh_runner.go:195] Run: cat /etc/os-release
I0912 21:45:19.833325 1595550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0912 21:45:19.833411 1595550 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0912 21:45:19.833428 1595550 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0912 21:45:19.833436 1595550 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0912 21:45:19.833448 1595550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1589418/.minikube/addons for local assets ...
I0912 21:45:19.833535 1595550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1589418/.minikube/files for local assets ...
I0912 21:45:19.833561 1595550 start.go:296] duration metric: took 119.914562ms for postStartSetup
I0912 21:45:19.833889 1595550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-648158
I0912 21:45:19.853376 1595550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/config.json ...
I0912 21:45:19.853657 1595550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0912 21:45:19.853719 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:19.870323 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:19.965999 1595550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0912 21:45:19.970590 1595550 start.go:128] duration metric: took 10.469199921s to createHost
I0912 21:45:19.970613 1595550 start.go:83] releasing machines lock for "addons-648158", held for 10.46934336s
I0912 21:45:19.970684 1595550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-648158
I0912 21:45:19.987734 1595550 ssh_runner.go:195] Run: cat /version.json
I0912 21:45:19.987793 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:19.988088 1595550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0912 21:45:19.988151 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:20.013682 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:20.029638 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:20.247428 1595550 ssh_runner.go:195] Run: systemctl --version
I0912 21:45:20.251674 1595550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0912 21:45:20.255879 1595550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0912 21:45:20.281724 1595550 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0912 21:45:20.281806 1595550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0912 21:45:20.313660 1595550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0912 21:45:20.313688 1595550 start.go:495] detecting cgroup driver to use...
I0912 21:45:20.313723 1595550 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0912 21:45:20.313826 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0912 21:45:20.329940 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0912 21:45:20.340341 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0912 21:45:20.350162 1595550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0912 21:45:20.350241 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0912 21:45:20.359880 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0912 21:45:20.369781 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0912 21:45:20.379531 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0912 21:45:20.389728 1595550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0912 21:45:20.398677 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0912 21:45:20.408119 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0912 21:45:20.417909 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0912 21:45:20.427500 1595550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0912 21:45:20.436059 1595550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0912 21:45:20.444410 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0912 21:45:20.529407 1595550 ssh_runner.go:195] Run: sudo systemctl restart containerd
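The sed commands above edit containerd's CRI settings in place before this restart; on containerd 1.x the fields they touch typically live in a config.toml fragment of roughly this shape (illustrative sketch, not copied from this run; exact section paths vary by containerd version):
# /etc/containerd/config.toml (illustrative fragment)
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.10"
  enable_unprivileged_ports = true
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false   # matches the "cgroupfs" driver detected above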
I0912 21:45:20.622851 1595550 start.go:495] detecting cgroup driver to use...
I0912 21:45:20.622904 1595550 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0912 21:45:20.622975 1595550 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0912 21:45:20.636940 1595550 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0912 21:45:20.637067 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0912 21:45:20.651183 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0912 21:45:20.674013 1595550 ssh_runner.go:195] Run: which cri-dockerd
I0912 21:45:20.678835 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0912 21:45:20.687752 1595550 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0912 21:45:20.713613 1595550 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0912 21:45:20.818050 1595550 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0912 21:45:20.921095 1595550 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0912 21:45:20.921293 1595550 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0912 21:45:20.942326 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0912 21:45:21.042133 1595550 ssh_runner.go:195] Run: sudo systemctl restart docker
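The 130-byte /etc/docker/daemon.json pushed just before this restart is not echoed in the log; a daemon.json that pins the cgroup driver the way docker.go:574 describes would typically look something like the following (contents assumed, not taken from this run):
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}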
I0912 21:45:21.317629 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0912 21:45:21.329792 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0912 21:45:21.341876 1595550 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0912 21:45:21.433675 1595550 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0912 21:45:21.531235 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0912 21:45:21.616716 1595550 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0912 21:45:21.630833 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0912 21:45:21.641937 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0912 21:45:21.726283 1595550 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0912 21:45:21.800039 1595550 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0912 21:45:21.800196 1595550 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0912 21:45:21.804546 1595550 start.go:563] Will wait 60s for crictl version
I0912 21:45:21.804608 1595550 ssh_runner.go:195] Run: which crictl
I0912 21:45:21.808131 1595550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0912 21:45:21.846623 1595550 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0912 21:45:21.846763 1595550 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0912 21:45:21.869582 1595550 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0912 21:45:21.896391 1595550 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0912 21:45:21.896558 1595550 cli_runner.go:164] Run: docker network inspect addons-648158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 21:45:21.912634 1595550 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0912 21:45:21.916282 1595550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0912 21:45:21.927505 1595550 kubeadm.go:883] updating cluster {Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0912 21:45:21.927629 1595550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0912 21:45:21.927693 1595550 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0912 21:45:21.946232 1595550 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0912 21:45:21.946254 1595550 docker.go:615] Images already preloaded, skipping extraction
I0912 21:45:21.946321 1595550 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0912 21:45:21.963792 1595550 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0912 21:45:21.963818 1595550 cache_images.go:84] Images are preloaded, skipping loading
I0912 21:45:21.963837 1595550 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0912 21:45:21.963939 1595550 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-648158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0912 21:45:21.964010 1595550 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0912 21:45:22.020002 1595550 cni.go:84] Creating CNI manager for ""
I0912 21:45:22.020035 1595550 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0912 21:45:22.020046 1595550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0912 21:45:22.020087 1595550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-648158 NodeName:addons-648158 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0912 21:45:22.020284 1595550 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-648158"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
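Aside: a generated config like the one above can also be exercised non-destructively before the real init at 21:45:24 below, for example with kubeadm's dry-run mode (a manual check, not something this run performs):
sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run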
I0912 21:45:22.020365 1595550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0912 21:45:22.029600 1595550 binaries.go:44] Found k8s binaries, skipping transfer
I0912 21:45:22.029678 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0912 21:45:22.038688 1595550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0912 21:45:22.057249 1595550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0912 21:45:22.075395 1595550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0912 21:45:22.093938 1595550 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0912 21:45:22.097624 1595550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0912 21:45:22.108291 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0912 21:45:22.196420 1595550 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0912 21:45:22.210471 1595550 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158 for IP: 192.168.49.2
I0912 21:45:22.210536 1595550 certs.go:194] generating shared ca certs ...
I0912 21:45:22.210567 1595550 certs.go:226] acquiring lock for ca certs: {Name:mkbf22811db03e42b0f0c081454eb3f99708b183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:22.211317 1595550 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key
I0912 21:45:22.433480 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt ...
I0912 21:45:22.433513 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt: {Name:mk72e5f935fec294e69009cf4aea31435c70e4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:22.433737 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key ...
I0912 21:45:22.433751 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key: {Name:mkb6385cc4d730e4d7a49f02cefcaae4249d85d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:22.434255 1595550 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key
I0912 21:45:22.942358 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.crt ...
I0912 21:45:22.942388 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.crt: {Name:mk450a2530aa8953153326429aca610c57afd125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:22.942581 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key ...
I0912 21:45:22.942604 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key: {Name:mk66c0ef036ecfd03ff400c618ae21c60bb0c60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:22.943068 1595550 certs.go:256] generating profile certs ...
I0912 21:45:22.943139 1595550 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.key
I0912 21:45:22.943163 1595550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt with IP's: []
I0912 21:45:23.402774 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt ...
I0912 21:45:23.402807 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: {Name:mk1f2498fc67c90097a8f66b5054399b23fe170f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:23.403479 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.key ...
I0912 21:45:23.403495 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.key: {Name:mkdfcc07aa074ee1904526cb21a167c2a8cecfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:23.403596 1595550 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0
I0912 21:45:23.403618 1595550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0912 21:45:23.734010 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0 ...
I0912 21:45:23.734040 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0: {Name:mk4f0d9173e6a442fd07c768c295ccf81e51b6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:23.734700 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0 ...
I0912 21:45:23.734717 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0: {Name:mk2326a38e6bc7100b28b1adaef28869bbabcc2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:23.734811 1595550 certs.go:381] copying /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0 -> /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt
I0912 21:45:23.734896 1595550 certs.go:385] copying /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0 -> /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key
I0912 21:45:23.734959 1595550 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key
I0912 21:45:23.734982 1595550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt with IP's: []
I0912 21:45:24.311226 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt ...
I0912 21:45:24.311262 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt: {Name:mkd035209e7f3c86c91125474d5aebc2da916a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:24.311469 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key ...
I0912 21:45:24.311486 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key: {Name:mk8af13efd87462a8c15a1a6061ce1153fe9fa6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:24.312124 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca-key.pem (1675 bytes)
I0912 21:45:24.312171 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem (1082 bytes)
I0912 21:45:24.312202 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem (1123 bytes)
I0912 21:45:24.312231 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/key.pem (1679 bytes)
I0912 21:45:24.312916 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0912 21:45:24.338386 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0912 21:45:24.362685 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0912 21:45:24.386838 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0912 21:45:24.410216 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0912 21:45:24.433520 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0912 21:45:24.457173 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0912 21:45:24.480614 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0912 21:45:24.504138 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0912 21:45:24.527993 1595550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0912 21:45:24.545457 1595550 ssh_runner.go:195] Run: openssl version
I0912 21:45:24.550812 1595550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0912 21:45:24.560312 1595550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0912 21:45:24.563830 1595550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:45 /usr/share/ca-certificates/minikubeCA.pem
I0912 21:45:24.563896 1595550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0912 21:45:24.570573 1595550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
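The b5213941.0 name in the symlink above is the OpenSSL subject-hash form used for CA lookup; the two commands from this run relate as follows (shown together for clarity):
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, here b5213941
sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # TLS libraries find the CA via <hash>.0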
I0912 21:45:24.579843 1595550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0912 21:45:24.583111 1595550 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0912 21:45:24.583160 1595550 kubeadm.go:392] StartCluster: {Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0912 21:45:24.583283 1595550 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0912 21:45:24.615777 1595550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0912 21:45:24.625563 1595550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0912 21:45:24.634044 1595550 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0912 21:45:24.634109 1595550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0912 21:45:24.643924 1595550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0912 21:45:24.643945 1595550 kubeadm.go:157] found existing configuration files:
I0912 21:45:24.643995 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0912 21:45:24.652876 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0912 21:45:24.652942 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0912 21:45:24.661193 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0912 21:45:24.670699 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0912 21:45:24.670772 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0912 21:45:24.679137 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0912 21:45:24.687316 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0912 21:45:24.687381 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0912 21:45:24.695484 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0912 21:45:24.704182 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0912 21:45:24.704280 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0912 21:45:24.712442 1595550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0912 21:45:24.754741 1595550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0912 21:45:24.754802 1595550 kubeadm.go:310] [preflight] Running pre-flight checks
I0912 21:45:24.777126 1595550 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0912 21:45:24.777202 1595550 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
I0912 21:45:24.777239 1595550 kubeadm.go:310] OS: Linux
I0912 21:45:24.777287 1595550 kubeadm.go:310] CGROUPS_CPU: enabled
I0912 21:45:24.777338 1595550 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0912 21:45:24.777388 1595550 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0912 21:45:24.777438 1595550 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0912 21:45:24.777488 1595550 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0912 21:45:24.777539 1595550 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0912 21:45:24.777585 1595550 kubeadm.go:310] CGROUPS_PIDS: enabled
I0912 21:45:24.777636 1595550 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0912 21:45:24.777683 1595550 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0912 21:45:24.836755 1595550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0912 21:45:24.836867 1595550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0912 21:45:24.837004 1595550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0912 21:45:24.848565 1595550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0912 21:45:24.854378 1595550 out.go:235] - Generating certificates and keys ...
I0912 21:45:24.854489 1595550 kubeadm.go:310] [certs] Using existing ca certificate authority
I0912 21:45:24.854580 1595550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0912 21:45:24.970260 1595550 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0912 21:45:25.558010 1595550 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0912 21:45:26.640945 1595550 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0912 21:45:26.867119 1595550 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0912 21:45:27.247898 1595550 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0912 21:45:27.248175 1595550 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-648158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0912 21:45:27.856395 1595550 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0912 21:45:27.856614 1595550 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-648158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0912 21:45:27.962437 1595550 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0912 21:45:29.036280 1595550 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0912 21:45:30.133858 1595550 kubeadm.go:310] [certs] Generating "sa" key and public key
I0912 21:45:30.134163 1595550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0912 21:45:30.381887 1595550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0912 21:45:30.620823 1595550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0912 21:45:30.867160 1595550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0912 21:45:31.507155 1595550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0912 21:45:32.263640 1595550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0912 21:45:32.264398 1595550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0912 21:45:32.267495 1595550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0912 21:45:32.271086 1595550 out.go:235] - Booting up control plane ...
I0912 21:45:32.271192 1595550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0912 21:45:32.271268 1595550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0912 21:45:32.272917 1595550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0912 21:45:32.284203 1595550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0912 21:45:32.290357 1595550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0912 21:45:32.290680 1595550 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0912 21:45:32.405529 1595550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0912 21:45:32.405648 1595550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0912 21:45:34.404315 1595550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.000799788s
I0912 21:45:34.404421 1595550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0912 21:45:40.406409 1595550 kubeadm.go:310] [api-check] The API server is healthy after 6.002087447s
I0912 21:45:40.425225 1595550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0912 21:45:40.442830 1595550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0912 21:45:40.465774 1595550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0912 21:45:40.465966 1595550 kubeadm.go:310] [mark-control-plane] Marking the node addons-648158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0912 21:45:40.476221 1595550 kubeadm.go:310] [bootstrap-token] Using token: xaukdn.izn2qramjufoi8qt
I0912 21:45:40.478979 1595550 out.go:235] - Configuring RBAC rules ...
I0912 21:45:40.479107 1595550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0912 21:45:40.483534 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0912 21:45:40.493696 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0912 21:45:40.498584 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0912 21:45:40.503705 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0912 21:45:40.507697 1595550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0912 21:45:40.812885 1595550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0912 21:45:41.242035 1595550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0912 21:45:41.813576 1595550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0912 21:45:41.814549 1595550 kubeadm.go:310]
I0912 21:45:41.814632 1595550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0912 21:45:41.814644 1595550 kubeadm.go:310]
I0912 21:45:41.814719 1595550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0912 21:45:41.814731 1595550 kubeadm.go:310]
I0912 21:45:41.814766 1595550 kubeadm.go:310] mkdir -p $HOME/.kube
I0912 21:45:41.814827 1595550 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0912 21:45:41.814881 1595550 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0912 21:45:41.814894 1595550 kubeadm.go:310]
I0912 21:45:41.814947 1595550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0912 21:45:41.814963 1595550 kubeadm.go:310]
I0912 21:45:41.815010 1595550 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0912 21:45:41.815019 1595550 kubeadm.go:310]
I0912 21:45:41.815069 1595550 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0912 21:45:41.815144 1595550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0912 21:45:41.815216 1595550 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0912 21:45:41.815226 1595550 kubeadm.go:310]
I0912 21:45:41.815307 1595550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0912 21:45:41.815393 1595550 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0912 21:45:41.815452 1595550 kubeadm.go:310]
I0912 21:45:41.815537 1595550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xaukdn.izn2qramjufoi8qt \
I0912 21:45:41.815641 1595550 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:889bf8e7f7fa68711c5600116be7317db5666eb96597e491e0dfca9010b6a355 \
I0912 21:45:41.815665 1595550 kubeadm.go:310] --control-plane
I0912 21:45:41.815674 1595550 kubeadm.go:310]
I0912 21:45:41.815768 1595550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0912 21:45:41.815779 1595550 kubeadm.go:310]
I0912 21:45:41.819215 1595550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xaukdn.izn2qramjufoi8qt \
I0912 21:45:41.819329 1595550 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:889bf8e7f7fa68711c5600116be7317db5666eb96597e491e0dfca9010b6a355
I0912 21:45:41.819628 1595550 kubeadm.go:310] W0912 21:45:24.751449 1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0912 21:45:41.819905 1595550 kubeadm.go:310] W0912 21:45:24.752316 1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0912 21:45:41.820110 1595550 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
I0912 21:45:41.820213 1595550 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0912 21:45:41.820231 1595550 cni.go:84] Creating CNI manager for ""
I0912 21:45:41.820246 1595550 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0912 21:45:41.823242 1595550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0912 21:45:41.825877 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0912 21:45:41.835675 1595550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
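The 496-byte /etc/cni/net.d/1-k8s.conflist is not printed here; a bridge CNI config of the kind cni.go:158 recommends usually has roughly this shape, with the pod CIDR matching the 10.244.0.0/16 used elsewhere in this run (illustrative sketch only, field values assumed):
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}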
I0912 21:45:41.858132 1595550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0912 21:45:41.858271 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:41.858366 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-648158 minikube.k8s.io/updated_at=2024_09_12T21_45_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-648158 minikube.k8s.io/primary=true
I0912 21:45:41.992266 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:41.992334 1595550 ops.go:34] apiserver oom_adj: -16
I0912 21:45:42.493155 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:42.992387 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:43.492437 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:43.992931 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:44.493137 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:44.992445 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:45.493002 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:45.992849 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:46.492326 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0912 21:45:46.635777 1595550 kubeadm.go:1113] duration metric: took 4.777556205s to wait for elevateKubeSystemPrivileges
I0912 21:45:46.635803 1595550 kubeadm.go:394] duration metric: took 22.052646368s to StartCluster
I0912 21:45:46.635820 1595550 settings.go:142] acquiring lock: {Name:mke0a909d4fb4359a87942368342244776ea0df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:46.635937 1595550 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19616-1589418/kubeconfig
I0912 21:45:46.636319 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/kubeconfig: {Name:mk5c78d80e4776a3c25d7663bf634139150573f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0912 21:45:46.636970 1595550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0912 21:45:46.637094 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0912 21:45:46.637340 1595550 config.go:182] Loaded profile config "addons-648158": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:45:46.637369 1595550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0912 21:45:46.637439 1595550 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-648158"
I0912 21:45:46.637450 1595550 addons.go:69] Setting gcp-auth=true in profile "addons-648158"
I0912 21:45:46.637475 1595550 mustload.go:65] Loading cluster: addons-648158
I0912 21:45:46.637488 1595550 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-648158"
I0912 21:45:46.637558 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.637576 1595550 config.go:182] Loaded profile config "addons-648158": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 21:45:46.637893 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.638087 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.638489 1595550 addons.go:69] Setting ingress=true in profile "addons-648158"
I0912 21:45:46.638519 1595550 addons.go:234] Setting addon ingress=true in "addons-648158"
I0912 21:45:46.638554 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.638952 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.642940 1595550 addons.go:69] Setting cloud-spanner=true in profile "addons-648158"
I0912 21:45:46.642982 1595550 addons.go:234] Setting addon cloud-spanner=true in "addons-648158"
I0912 21:45:46.643029 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.643469 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.643977 1595550 addons.go:69] Setting ingress-dns=true in profile "addons-648158"
I0912 21:45:46.644027 1595550 addons.go:234] Setting addon ingress-dns=true in "addons-648158"
I0912 21:45:46.644121 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.644638 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.644832 1595550 addons.go:69] Setting inspektor-gadget=true in profile "addons-648158"
I0912 21:45:46.663106 1595550 addons.go:234] Setting addon inspektor-gadget=true in "addons-648158"
I0912 21:45:46.663169 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.663663 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.637446 1595550 addons.go:69] Setting default-storageclass=true in profile "addons-648158"
I0912 21:45:46.673344 1595550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-648158"
I0912 21:45:46.673803 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.644842 1595550 addons.go:69] Setting metrics-server=true in profile "addons-648158"
I0912 21:45:46.674033 1595550 addons.go:234] Setting addon metrics-server=true in "addons-648158"
I0912 21:45:46.674076 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.674947 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.644846 1595550 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-648158"
I0912 21:45:46.676648 1595550 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-648158"
I0912 21:45:46.681317 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.644849 1595550 addons.go:69] Setting registry=true in profile "addons-648158"
I0912 21:45:46.697266 1595550 addons.go:234] Setting addon registry=true in "addons-648158"
I0912 21:45:46.644852 1595550 addons.go:69] Setting storage-provisioner=true in profile "addons-648158"
I0912 21:45:46.644856 1595550 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-648158"
I0912 21:45:46.644859 1595550 addons.go:69] Setting volcano=true in profile "addons-648158"
I0912 21:45:46.644864 1595550 addons.go:69] Setting volumesnapshots=true in profile "addons-648158"
I0912 21:45:46.644889 1595550 out.go:177] * Verifying Kubernetes components...
I0912 21:45:46.637440 1595550 addons.go:69] Setting yakd=true in profile "addons-648158"
I0912 21:45:46.697837 1595550 addons.go:234] Setting addon yakd=true in "addons-648158"
I0912 21:45:46.697990 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.698617 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.715090 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.715263 1595550 addons.go:234] Setting addon storage-provisioner=true in "addons-648158"
I0912 21:45:46.715316 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.715775 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.715235 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.722724 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.736754 1595550 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-648158"
I0912 21:45:46.737206 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.752406 1595550 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0912 21:45:46.753374 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.754588 1595550 addons.go:234] Setting addon volcano=true in "addons-648158"
I0912 21:45:46.754661 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.755134 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.776675 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0912 21:45:46.779444 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0912 21:45:46.779653 1595550 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0912 21:45:46.783042 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0912 21:45:46.783275 1595550 addons.go:234] Setting addon volumesnapshots=true in "addons-648158"
I0912 21:45:46.783318 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.783785 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.804667 1595550 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0912 21:45:46.804931 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0912 21:45:46.810164 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0912 21:45:46.810835 1595550 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0912 21:45:46.810858 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0912 21:45:46.810930 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.816313 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0912 21:45:46.823996 1595550 addons.go:234] Setting addon default-storageclass=true in "addons-648158"
I0912 21:45:46.824040 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:46.824474 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:46.830021 1595550 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0912 21:45:46.831617 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0912 21:45:46.833806 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0912 21:45:46.834934 1595550 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0912 21:45:46.834949 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0912 21:45:46.835007 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.851420 1595550 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0912 21:45:46.852190 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0912 21:45:46.856514 1595550 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0912 21:45:46.862059 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0912 21:45:46.862289 1595550 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0912 21:45:46.862302 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0912 21:45:46.862367 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.872707 1595550 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0912 21:45:46.873245 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0912 21:45:46.873340 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.886842 1595550 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0912 21:45:46.889404 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0912 21:45:46.889436 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0912 21:45:46.889502 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.890363 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0912 21:45:46.890381 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0912 21:45:46.890447 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.910780 1595550 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0912 21:45:46.913786 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0912 21:45:46.913808 1595550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0912 21:45:46.913869 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:46.933215 1595550 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0912 21:45:46.942426 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0912 21:45:46.942456 1595550 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0912 21:45:46.942529 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.039988 1595550 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-648158"
I0912 21:45:47.040035 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:47.040466 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:47.044449 1595550 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0912 21:45:47.049716 1595550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0912 21:45:47.049738 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0912 21:45:47.049806 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.063165 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.068041 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.072482 1595550 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0912 21:45:47.075113 1595550 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0912 21:45:47.077648 1595550 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0912 21:45:47.080314 1595550 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0912 21:45:47.093545 1595550 out.go:177] - Using image docker.io/registry:2.8.3
I0912 21:45:47.094720 1595550 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0912 21:45:47.094744 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0912 21:45:47.094814 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.093545 1595550 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0912 21:45:47.101208 1595550 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0912 21:45:47.101229 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0912 21:45:47.101298 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.111850 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0912 21:45:47.111874 1595550 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0912 21:45:47.111948 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.115046 1595550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0912 21:45:47.115067 1595550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0912 21:45:47.115129 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.135677 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.141516 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.153737 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.163808 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.176279 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.197406 1595550 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0912 21:45:47.217509 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.261305 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.261830 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.283390 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.284376 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.287151 1595550 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0912 21:45:47.289672 1595550 out.go:177] - Using image docker.io/busybox:stable
I0912 21:45:47.292349 1595550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0912 21:45:47.292370 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0912 21:45:47.292451 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:47.292590 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.332667 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:47.872884 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0912 21:45:48.008211 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0912 21:45:48.008251 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0912 21:45:48.069924 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0912 21:45:48.077075 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0912 21:45:48.141661 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0912 21:45:48.141698 1595550 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0912 21:45:48.177696 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0912 21:45:48.217187 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0912 21:45:48.326485 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0912 21:45:48.329657 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0912 21:45:48.329694 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0912 21:45:48.384153 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0912 21:45:48.394680 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0912 21:45:48.394712 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0912 21:45:48.467347 1595550 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0912 21:45:48.467376 1595550 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0912 21:45:48.477875 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0912 21:45:48.477917 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0912 21:45:48.492289 1595550 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0912 21:45:48.492326 1595550 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0912 21:45:48.513624 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0912 21:45:48.513652 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0912 21:45:48.518672 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0912 21:45:48.572420 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0912 21:45:48.572455 1595550 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0912 21:45:48.630419 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0912 21:45:48.630463 1595550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0912 21:45:48.693051 1595550 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0912 21:45:48.693078 1595550 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0912 21:45:48.724988 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0912 21:45:48.725030 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0912 21:45:48.750237 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0912 21:45:48.750265 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0912 21:45:48.755337 1595550 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0912 21:45:48.755367 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0912 21:45:48.802251 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0912 21:45:48.802280 1595550 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0912 21:45:48.831218 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0912 21:45:48.831246 1595550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0912 21:45:48.884971 1595550 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0912 21:45:48.885000 1595550 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0912 21:45:49.056913 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0912 21:45:49.062027 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0912 21:45:49.062052 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0912 21:45:49.067749 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0912 21:45:49.067791 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0912 21:45:49.084666 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0912 21:45:49.084707 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0912 21:45:49.149870 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0912 21:45:49.149912 1595550 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0912 21:45:49.158809 1595550 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.961372848s)
I0912 21:45:49.158920 1595550 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.325253507s)
I0912 21:45:49.158939 1595550 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0912 21:45:49.159803 1595550 node_ready.go:35] waiting up to 6m0s for node "addons-648158" to be "Ready" ...
I0912 21:45:49.169885 1595550 node_ready.go:49] node "addons-648158" has status "Ready":"True"
I0912 21:45:49.169916 1595550 node_ready.go:38] duration metric: took 10.089341ms for node "addons-648158" to be "Ready" ...
I0912 21:45:49.169928 1595550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0912 21:45:49.179794 1595550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace to be "Ready" ...
I0912 21:45:49.340728 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0912 21:45:49.433274 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0912 21:45:49.445216 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0912 21:45:49.445243 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0912 21:45:49.502705 1595550 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0912 21:45:49.502731 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0912 21:45:49.531290 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0912 21:45:49.531324 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0912 21:45:49.666835 1595550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-648158" context rescaled to 1 replicas
I0912 21:45:49.676177 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.803253843s)
I0912 21:45:49.733977 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0912 21:45:49.734005 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0912 21:45:49.808722 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0912 21:45:49.890529 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0912 21:45:49.890555 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0912 21:45:49.959216 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0912 21:45:49.959241 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0912 21:45:50.173812 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0912 21:45:50.173838 1595550 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0912 21:45:50.254300 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0912 21:45:50.563795 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0912 21:45:50.563816 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0912 21:45:50.863267 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0912 21:45:50.863293 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0912 21:45:51.185871 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:45:51.240748 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0912 21:45:51.240785 1595550 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0912 21:45:52.066377 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0912 21:45:53.186197 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:45:53.768045 1595550 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0912 21:45:53.768128 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:53.809200 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:55.158917 1595550 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0912 21:45:55.187227 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:45:55.489207 1595550 addons.go:234] Setting addon gcp-auth=true in "addons-648158"
I0912 21:45:55.489262 1595550 host.go:66] Checking if "addons-648158" exists ...
I0912 21:45:55.489752 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
I0912 21:45:55.514559 1595550 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0912 21:45:55.514682 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
I0912 21:45:55.540902 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
I0912 21:45:56.657675 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.479943666s)
I0912 21:45:56.657705 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.331198572s)
I0912 21:45:56.657675 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.580558405s)
I0912 21:45:56.657693 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.440482256s)
I0912 21:45:56.657789 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.587831708s)
I0912 21:45:56.657799 1595550 addons.go:475] Verifying addon ingress=true in "addons-648158"
I0912 21:45:56.660585 1595550 out.go:177] * Verifying ingress addon...
I0912 21:45:56.664640 1595550 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0912 21:45:56.671855 1595550 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0912 21:45:56.671889 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:57.168903 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:57.669566 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:57.689481 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:45:58.202852 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:58.678169 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:59.202859 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.684152578s)
I0912 21:45:59.203160 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.146215827s)
I0912 21:45:59.203180 1595550 addons.go:475] Verifying addon registry=true in "addons-648158"
I0912 21:45:59.203607 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.770300306s)
I0912 21:45:59.203683 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.862785214s)
I0912 21:45:59.203710 1595550 addons.go:475] Verifying addon metrics-server=true in "addons-648158"
I0912 21:45:59.203763 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.394995568s)
W0912 21:45:59.203808 1595550 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0912 21:45:59.203894 1595550 retry.go:31] will retry after 306.935082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0912 21:45:59.203864 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.949532392s)
I0912 21:45:59.204038 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.819856592s)
I0912 21:45:59.206074 1595550 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-648158 service yakd-dashboard -n yakd-dashboard
I0912 21:45:59.206163 1595550 out.go:177] * Verifying registry addon...
I0912 21:45:59.211064 1595550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0912 21:45:59.308940 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:59.310302 1595550 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0912 21:45:59.310327 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:45:59.512001 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0912 21:45:59.684407 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:45:59.696641 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:45:59.779807 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:00.213565 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:00.222293 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:00.251848 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.185415255s)
I0912 21:46:00.251892 1595550 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-648158"
I0912 21:46:00.252133 1595550 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.737503729s)
I0912 21:46:00.259459 1595550 out.go:177] * Verifying csi-hostpath-driver addon...
I0912 21:46:00.259552 1595550 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0912 21:46:00.263836 1595550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0912 21:46:00.282688 1595550 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0912 21:46:00.284894 1595550 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0912 21:46:00.284930 1595550 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0912 21:46:00.333711 1595550 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0912 21:46:00.333740 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:00.450555 1595550 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0912 21:46:00.450580 1595550 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0912 21:46:00.573429 1595550 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0912 21:46:00.573451 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0912 21:46:00.616464 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0912 21:46:00.672721 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:00.715671 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:00.769596 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:01.169508 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:01.216973 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:01.270322 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:01.670164 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:01.716696 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:01.769071 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:02.170852 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:02.187346 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:02.275283 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:02.276738 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:02.300908 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.788856331s)
I0912 21:46:02.342769 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.726219025s)
I0912 21:46:02.345671 1595550 addons.go:475] Verifying addon gcp-auth=true in "addons-648158"
I0912 21:46:02.348084 1595550 out.go:177] * Verifying gcp-auth addon...
I0912 21:46:02.351170 1595550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0912 21:46:02.370943 1595550 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0912 21:46:02.669230 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:02.714852 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:02.768634 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:03.169782 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:03.214599 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:03.268243 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:03.669759 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:03.715188 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:03.769370 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:04.171350 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:04.191783 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:04.214953 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:04.269214 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:04.668946 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:04.715207 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:04.770776 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:05.169262 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:05.215206 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:05.268593 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:05.668641 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:05.715425 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:05.770249 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:06.170386 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:06.215629 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:06.269355 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:06.668800 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:06.688037 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:06.714972 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:06.769475 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:07.169325 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:07.215212 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:07.269233 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:07.672036 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:07.715685 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:07.774420 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:08.170032 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:08.215564 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:08.268314 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:08.669618 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:08.714796 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:08.768650 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:09.169973 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:09.187926 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:09.215863 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:09.268950 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:09.669620 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:09.716013 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:09.769003 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:10.169937 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:10.215067 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:10.269048 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:10.668821 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:10.714410 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:10.773115 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:11.168809 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:11.215230 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:11.268765 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:11.670789 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:11.685972 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:11.716113 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:11.768763 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:12.168851 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:12.215987 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:12.270347 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:12.668949 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:12.715881 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:12.768439 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:13.169333 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:13.215069 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:13.268589 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:13.668890 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:13.686843 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:13.715472 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:13.768844 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:14.170019 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:14.215665 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:14.269288 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:14.668999 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:14.714473 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:14.769112 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:15.168559 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:15.215413 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:15.269984 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:15.696658 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:15.699239 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:15.739014 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:15.779270 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:16.172338 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:16.215279 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:16.269880 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:16.670206 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:16.715845 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:16.772154 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:17.169651 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:17.215651 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:17.268489 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:17.669469 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:17.714751 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:17.769087 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:18.171159 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:18.187735 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:18.215761 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:18.269911 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:18.670327 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:18.715637 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:18.768599 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:19.169726 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:19.215066 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:19.269796 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:19.669586 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:19.715135 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:19.769337 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:20.169970 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:20.215864 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:20.268771 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:20.670646 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:20.688145 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:20.714900 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:20.768814 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:21.169495 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:21.215329 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:21.269606 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:21.669579 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:21.714912 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:21.769067 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:22.170696 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:22.214950 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:22.269303 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:22.671169 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:22.691475 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:22.715487 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:22.770320 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:23.169651 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:23.215415 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0912 21:46:23.269524 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:23.670074 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:23.715521 1595550 kapi.go:107] duration metric: took 24.504452603s to wait for kubernetes.io/minikube-addons=registry ...
I0912 21:46:23.768357 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:24.169330 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:24.269176 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:24.674985 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:24.769813 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:25.188522 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:25.191069 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:25.269946 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:25.670013 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:25.772104 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:26.170038 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:26.268272 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:26.669085 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:26.768080 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:27.169270 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:27.269396 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:27.669466 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:27.686225 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
I0912 21:46:27.770233 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:28.169340 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:28.268743 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:28.669926 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:28.686206 1595550 pod_ready.go:93] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"True"
I0912 21:46:28.686235 1595550 pod_ready.go:82] duration metric: took 39.506350936s for pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.686246 1595550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.690071 1595550 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hrb9k" not found
I0912 21:46:28.690101 1595550 pod_ready.go:82] duration metric: took 3.847695ms for pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace to be "Ready" ...
E0912 21:46:28.690113 1595550 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hrb9k" not found
I0912 21:46:28.690121 1595550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.702606 1595550 pod_ready.go:93] pod "etcd-addons-648158" in "kube-system" namespace has status "Ready":"True"
I0912 21:46:28.702636 1595550 pod_ready.go:82] duration metric: took 12.507966ms for pod "etcd-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.702650 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.713587 1595550 pod_ready.go:93] pod "kube-apiserver-addons-648158" in "kube-system" namespace has status "Ready":"True"
I0912 21:46:28.713615 1595550 pod_ready.go:82] duration metric: took 10.95625ms for pod "kube-apiserver-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.713627 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.723742 1595550 pod_ready.go:93] pod "kube-controller-manager-addons-648158" in "kube-system" namespace has status "Ready":"True"
I0912 21:46:28.723766 1595550 pod_ready.go:82] duration metric: took 10.131851ms for pod "kube-controller-manager-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.723781 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q549p" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.768636 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:28.883832 1595550 pod_ready.go:93] pod "kube-proxy-q549p" in "kube-system" namespace has status "Ready":"True"
I0912 21:46:28.883860 1595550 pod_ready.go:82] duration metric: took 160.070713ms for pod "kube-proxy-q549p" in "kube-system" namespace to be "Ready" ...
I0912 21:46:28.883873 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:29.169634 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:29.269390 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:29.283773 1595550 pod_ready.go:93] pod "kube-scheduler-addons-648158" in "kube-system" namespace has status "Ready":"True"
I0912 21:46:29.283800 1595550 pod_ready.go:82] duration metric: took 399.918515ms for pod "kube-scheduler-addons-648158" in "kube-system" namespace to be "Ready" ...
I0912 21:46:29.283811 1595550 pod_ready.go:39] duration metric: took 40.113871028s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0912 21:46:29.283829 1595550 api_server.go:52] waiting for apiserver process to appear ...
I0912 21:46:29.283892 1595550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0912 21:46:29.302139 1595550 api_server.go:72] duration metric: took 42.665131026s to wait for apiserver process to appear ...
I0912 21:46:29.302218 1595550 api_server.go:88] waiting for apiserver healthz status ...
I0912 21:46:29.302253 1595550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0912 21:46:29.310840 1595550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0912 21:46:29.311947 1595550 api_server.go:141] control plane version: v1.31.1
I0912 21:46:29.311968 1595550 api_server.go:131] duration metric: took 9.730679ms to wait for apiserver health ...
I0912 21:46:29.311978 1595550 system_pods.go:43] waiting for kube-system pods to appear ...
I0912 21:46:29.490585 1595550 system_pods.go:59] 17 kube-system pods found
I0912 21:46:29.490625 1595550 system_pods.go:61] "coredns-7c65d6cfc9-g2jtl" [b45d3244-e501-473d-a897-230dc34f1077] Running
I0912 21:46:29.490635 1595550 system_pods.go:61] "csi-hostpath-attacher-0" [92ad0877-963f-48cf-9780-3322b096d442] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0912 21:46:29.490642 1595550 system_pods.go:61] "csi-hostpath-resizer-0" [c84263fa-c16e-4996-9c7d-4cd592123beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0912 21:46:29.490652 1595550 system_pods.go:61] "csi-hostpathplugin-whsg5" [ce162d67-971a-4cda-bdab-18421fb38423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0912 21:46:29.490658 1595550 system_pods.go:61] "etcd-addons-648158" [9423960f-eaef-4887-b1a1-d85a94bdcf6b] Running
I0912 21:46:29.490670 1595550 system_pods.go:61] "kube-apiserver-addons-648158" [5be520a4-b7d1-4092-b3c7-9763ac147461] Running
I0912 21:46:29.490675 1595550 system_pods.go:61] "kube-controller-manager-addons-648158" [1e51a3ca-0473-4cb7-a8cf-e7ce80c5b580] Running
I0912 21:46:29.490679 1595550 system_pods.go:61] "kube-ingress-dns-minikube" [d0dae086-4398-437b-b5b6-17b722bf7b0b] Running
I0912 21:46:29.490686 1595550 system_pods.go:61] "kube-proxy-q549p" [1d5423b4-56c7-4981-a867-72374a2f1f7b] Running
I0912 21:46:29.490690 1595550 system_pods.go:61] "kube-scheduler-addons-648158" [2491d207-d29b-4008-93bd-ac17186459f5] Running
I0912 21:46:29.490696 1595550 system_pods.go:61] "metrics-server-84c5f94fbc-k2dzp" [eb6c8928-90e8-498f-9bc2-1e0d328da8dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0912 21:46:29.490708 1595550 system_pods.go:61] "nvidia-device-plugin-daemonset-z4pwc" [c1e3c33a-ac28-4943-aad4-27c2cbb14eef] Running
I0912 21:46:29.490713 1595550 system_pods.go:61] "registry-66c9cd494c-k7dbs" [4a976b45-4ffe-45bb-bf8e-8235e03fda10] Running
I0912 21:46:29.490717 1595550 system_pods.go:61] "registry-proxy-7zbh8" [ee258d2f-09b0-4915-82e1-123bba604752] Running
I0912 21:46:29.490726 1595550 system_pods.go:61] "snapshot-controller-56fcc65765-qh9vh" [eedbd380-8ebf-4ee3-a5f8-b988ea320828] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0912 21:46:29.490739 1595550 system_pods.go:61] "snapshot-controller-56fcc65765-wh5dd" [836602c0-e62c-4016-8973-eba07bf5ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0912 21:46:29.490744 1595550 system_pods.go:61] "storage-provisioner" [7f4fc819-9ab7-484b-97c4-d3f1243ced5f] Running
I0912 21:46:29.490751 1595550 system_pods.go:74] duration metric: took 178.767061ms to wait for pod list to return data ...
I0912 21:46:29.490762 1595550 default_sa.go:34] waiting for default service account to be created ...
I0912 21:46:29.670975 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:29.689860 1595550 default_sa.go:45] found service account: "default"
I0912 21:46:29.689896 1595550 default_sa.go:55] duration metric: took 199.127089ms for default service account to be created ...
I0912 21:46:29.689907 1595550 system_pods.go:116] waiting for k8s-apps to be running ...
I0912 21:46:29.772550 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:29.897042 1595550 system_pods.go:86] 17 kube-system pods found
I0912 21:46:29.897130 1595550 system_pods.go:89] "coredns-7c65d6cfc9-g2jtl" [b45d3244-e501-473d-a897-230dc34f1077] Running
I0912 21:46:29.897167 1595550 system_pods.go:89] "csi-hostpath-attacher-0" [92ad0877-963f-48cf-9780-3322b096d442] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0912 21:46:29.897196 1595550 system_pods.go:89] "csi-hostpath-resizer-0" [c84263fa-c16e-4996-9c7d-4cd592123beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0912 21:46:29.897271 1595550 system_pods.go:89] "csi-hostpathplugin-whsg5" [ce162d67-971a-4cda-bdab-18421fb38423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0912 21:46:29.897305 1595550 system_pods.go:89] "etcd-addons-648158" [9423960f-eaef-4887-b1a1-d85a94bdcf6b] Running
I0912 21:46:29.897336 1595550 system_pods.go:89] "kube-apiserver-addons-648158" [5be520a4-b7d1-4092-b3c7-9763ac147461] Running
I0912 21:46:29.897363 1595550 system_pods.go:89] "kube-controller-manager-addons-648158" [1e51a3ca-0473-4cb7-a8cf-e7ce80c5b580] Running
I0912 21:46:29.897393 1595550 system_pods.go:89] "kube-ingress-dns-minikube" [d0dae086-4398-437b-b5b6-17b722bf7b0b] Running
I0912 21:46:29.897432 1595550 system_pods.go:89] "kube-proxy-q549p" [1d5423b4-56c7-4981-a867-72374a2f1f7b] Running
I0912 21:46:29.897459 1595550 system_pods.go:89] "kube-scheduler-addons-648158" [2491d207-d29b-4008-93bd-ac17186459f5] Running
I0912 21:46:29.897487 1595550 system_pods.go:89] "metrics-server-84c5f94fbc-k2dzp" [eb6c8928-90e8-498f-9bc2-1e0d328da8dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0912 21:46:29.897515 1595550 system_pods.go:89] "nvidia-device-plugin-daemonset-z4pwc" [c1e3c33a-ac28-4943-aad4-27c2cbb14eef] Running
I0912 21:46:29.897541 1595550 system_pods.go:89] "registry-66c9cd494c-k7dbs" [4a976b45-4ffe-45bb-bf8e-8235e03fda10] Running
I0912 21:46:29.897573 1595550 system_pods.go:89] "registry-proxy-7zbh8" [ee258d2f-09b0-4915-82e1-123bba604752] Running
I0912 21:46:29.897610 1595550 system_pods.go:89] "snapshot-controller-56fcc65765-qh9vh" [eedbd380-8ebf-4ee3-a5f8-b988ea320828] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0912 21:46:29.897644 1595550 system_pods.go:89] "snapshot-controller-56fcc65765-wh5dd" [836602c0-e62c-4016-8973-eba07bf5ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0912 21:46:29.897674 1595550 system_pods.go:89] "storage-provisioner" [7f4fc819-9ab7-484b-97c4-d3f1243ced5f] Running
I0912 21:46:29.897713 1595550 system_pods.go:126] duration metric: took 207.793843ms to wait for k8s-apps to be running ...
I0912 21:46:29.897740 1595550 system_svc.go:44] waiting for kubelet service to be running ....
I0912 21:46:29.897844 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0912 21:46:29.922841 1595550 system_svc.go:56] duration metric: took 25.092026ms WaitForService to wait for kubelet
I0912 21:46:29.922927 1595550 kubeadm.go:582] duration metric: took 43.285916696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0912 21:46:29.922971 1595550 node_conditions.go:102] verifying NodePressure condition ...
I0912 21:46:30.084587 1595550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0912 21:46:30.084674 1595550 node_conditions.go:123] node cpu capacity is 2
I0912 21:46:30.084704 1595550 node_conditions.go:105] duration metric: took 161.694723ms to run NodePressure ...
I0912 21:46:30.084733 1595550 start.go:241] waiting for startup goroutines ...
I0912 21:46:30.177392 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:30.270044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:30.672081 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:30.772336 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:31.170354 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:31.270100 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:31.687457 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:31.769263 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:32.171212 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:32.269264 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:32.668925 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:32.769950 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:33.170105 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:33.270003 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:33.675988 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:33.775921 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:34.169756 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:34.269518 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:34.669476 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:34.768975 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:35.169966 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:35.271798 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:35.670794 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:35.773687 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:36.169449 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:36.269313 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:36.668836 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:36.768716 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:37.170134 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:37.269427 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:37.670977 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:37.771877 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:38.170762 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:38.270007 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:38.669148 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:38.768671 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:39.250046 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:39.268893 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:39.670016 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:39.769514 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:40.169605 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:40.271111 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:40.670718 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:40.771886 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:41.169676 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:41.269242 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:41.669799 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:41.768693 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:42.170204 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:42.273999 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:42.675722 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:42.776128 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:43.169292 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:43.269671 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:43.669761 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:43.769867 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:44.169699 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:44.269311 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:44.669778 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:44.768538 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:45.170264 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:45.271004 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:45.671118 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:45.771999 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:46.176835 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:46.268304 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:46.670382 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:46.769207 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:47.168890 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:47.271789 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:47.669808 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:47.771187 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:48.169709 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:48.269244 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:48.670127 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:48.769736 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:49.170207 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:49.274740 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:49.668716 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:49.769341 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:50.170346 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:50.269124 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:50.670320 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:50.770415 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:51.168992 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:51.268625 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:51.669567 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:51.769647 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:52.170262 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:52.269562 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:52.670047 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:52.771392 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:53.169868 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:53.268340 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:53.669875 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:53.768254 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:54.169183 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:54.268473 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:54.668916 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:54.768248 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:55.169373 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:55.268860 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0912 21:46:55.674665 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:55.772168 1595550 kapi.go:107] duration metric: took 55.508329256s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0912 21:46:56.168626 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:56.669345 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:57.169640 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:57.668824 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:58.169827 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:58.670247 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:59.169501 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:46:59.669083 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:00.179504 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:00.668846 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:01.170175 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:01.669862 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:02.174585 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:02.670221 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:03.169516 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:03.668719 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:04.169234 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:04.669589 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:05.169340 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:05.669639 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:06.171075 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:06.679265 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0912 21:47:07.170475 1595550 kapi.go:107] duration metric: took 1m10.505832645s to wait for app.kubernetes.io/name=ingress-nginx ...
I0912 21:47:24.375785 1595550 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0912 21:47:24.375814 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:24.855511 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:25.355191 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:25.854314 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:26.355546 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:26.854528 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:27.354522 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:27.855704 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:28.354999 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:28.855221 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:29.355090 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:29.855055 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:30.355044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:30.854780 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:31.355382 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:31.855273 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:32.355372 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:32.855481 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:33.354675 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:33.854478 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:34.355683 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:34.854707 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:35.355032 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:35.855433 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:36.355693 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:36.855712 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:37.354454 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:37.854918 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:38.354931 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:38.855069 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:39.354493 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:39.855774 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:40.355558 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:40.855159 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:41.355234 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:41.854676 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:42.356044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:42.855065 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:43.355012 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:43.855880 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:44.354711 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:44.855135 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:45.355121 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:45.855508 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:46.355130 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:46.855067 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:47.354539 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:47.855149 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:48.354588 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:48.856191 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:49.355081 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:49.858474 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:50.355061 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:50.854619 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:51.355672 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:51.855052 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:52.355214 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:52.855525 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:53.355486 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:53.854574 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:54.355201 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:54.855173 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:55.354895 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:55.854555 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:56.355592 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:56.854809 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:57.354828 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:57.855413 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:58.355473 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:58.855827 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:59.354836 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:47:59.854666 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:00.355171 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:00.856926 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:01.355634 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:01.855736 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:02.354713 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:02.855643 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:03.354094 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:03.854174 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:04.355175 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:04.854963 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:05.360000 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:05.854708 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:06.354658 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:06.854637 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:07.354953 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:07.854665 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:08.355563 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:08.855894 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:09.354869 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:09.855046 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:10.355048 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:10.854826 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:11.354406 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:11.855368 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:12.354795 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:12.854521 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:13.355278 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:13.854774 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:14.354510 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:14.857767 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:15.354685 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:15.854798 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:16.355569 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:16.854826 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:17.355266 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:17.855337 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:18.355179 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:18.855374 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:19.355044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:19.854927 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:20.355021 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:20.857371 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:21.355150 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:21.855400 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:22.355534 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:22.855568 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:23.355759 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:23.855025 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:24.355093 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:24.856222 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:25.354230 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:25.854071 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:26.354616 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:26.855197 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:27.354253 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:27.854198 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:28.354876 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:28.855446 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:29.354987 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:29.854872 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:30.354559 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:30.855325 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:31.355470 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:31.855422 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:32.354965 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:32.856295 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0912 21:48:33.354854 1595550 kapi.go:107] duration metric: took 2m31.003674471s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0912 21:48:33.356435 1595550 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-648158 cluster.
I0912 21:48:33.357906 1595550 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0912 21:48:33.359309 1595550 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0912 21:48:33.361304 1595550 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, metrics-server, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0912 21:48:33.363157 1595550 addons.go:510] duration metric: took 2m46.725785485s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass metrics-server inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0912 21:48:33.363199 1595550 start.go:246] waiting for cluster config update ...
I0912 21:48:33.363219 1595550 start.go:255] writing updated cluster config ...
I0912 21:48:33.363498 1595550 ssh_runner.go:195] Run: rm -f paused
I0912 21:48:33.730603 1595550 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
I0912 21:48:33.732519 1595550 out.go:177] * Done! kubectl is now configured to use "addons-648158" cluster and "default" namespace by default
==> Docker <==
Sep 12 21:57:58 addons-648158 dockerd[1289]: time="2024-09-12T21:57:58.782591463Z" level=info msg="ignoring event" container=17a44cacfbdcb2f4ec16ac1bf1dcfc202467929c1d53afeecd8d3fc6f4329b5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:57:58 addons-648158 dockerd[1289]: time="2024-09-12T21:57:58.820549395Z" level=info msg="ignoring event" container=10b338530329858db82eae6608813035442f486a25ace047bf323992ccd5e39d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:57:58 addons-648158 dockerd[1289]: time="2024-09-12T21:57:58.900428538Z" level=info msg="ignoring event" container=5e4215422d2375aec0a0381ed2e145e7051002151e207a231712ee007a40e95b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:57:59 addons-648158 dockerd[1289]: time="2024-09-12T21:57:59.009891730Z" level=info msg="ignoring event" container=e1d325b34efd04a0dc1da7c980c08b49f42c84a2c254adee5bd132b45ff92198 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:57:59 addons-648158 dockerd[1289]: time="2024-09-12T21:57:59.047137752Z" level=info msg="ignoring event" container=95286393d5350846545c3a350994a74832db429cea42d0ee5c1dcd436adbe57b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="error getting RW layer size for container ID '07fd4a473c35ac84636124acdd02b0014320f1eec648bd5326c411ae3db57742': Error response from daemon: No such container: 07fd4a473c35ac84636124acdd02b0014320f1eec648bd5326c411ae3db57742"
Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07fd4a473c35ac84636124acdd02b0014320f1eec648bd5326c411ae3db57742'"
Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="error getting RW layer size for container ID '2294ba816028ee89300d6f917714177c2d6857f8ee507a85ddc8cca5adf8ad33': Error response from daemon: No such container: 2294ba816028ee89300d6f917714177c2d6857f8ee507a85ddc8cca5adf8ad33"
Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2294ba816028ee89300d6f917714177c2d6857f8ee507a85ddc8cca5adf8ad33'"
Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.280187502Z" level=info msg="ignoring event" container=56cfe2f7a61d9ba8c2457162a002eaf57ffa0da44da9e584e78e73d699f79024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.302788044Z" level=info msg="ignoring event" container=51f3820127ef98ae747f2b9a2b9ec5ce0521ef2308e47e9a6bd767a18126b35d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.481644842Z" level=info msg="ignoring event" container=878b88ec8c15b6a9c0b0ba5d71fc0a05ac2329978113749d52b8ffb8b0c435d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.506173363Z" level=info msg="ignoring event" container=5211cd8d531ceca7f3224ae4eb482d14a5ad3708be1bdc75d686bb96a9a46903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:09 addons-648158 dockerd[1289]: time="2024-09-12T21:58:09.447780138Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 12 21:58:09 addons-648158 dockerd[1289]: time="2024-09-12T21:58:09.450398710Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 12 21:58:13 addons-648158 dockerd[1289]: time="2024-09-12T21:58:13.093300283Z" level=info msg="ignoring event" container=52adf9fce3141d54f4b7944ff34c9e4932a45c423a7b39753b3252155eb946e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:13 addons-648158 dockerd[1289]: time="2024-09-12T21:58:13.214984652Z" level=info msg="ignoring event" container=d87856d84e8fe7e3e4367d44f238b08d775909aace632b56a45818021458fbde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:18 addons-648158 dockerd[1289]: time="2024-09-12T21:58:18.755570688Z" level=info msg="ignoring event" container=dc56e3507ff917af8589f93b70a47c46a3a4f5a1ce2d37d328bb085263934d01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:24 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51674a28bdbb97666e111c4f37326466e4b4466344f77a64f6eb59ecba596213/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 12 21:58:26 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:26Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
Sep 12 21:58:27 addons-648158 dockerd[1289]: time="2024-09-12T21:58:27.666887691Z" level=info msg="ignoring event" container=a2155d553aab7fc161e68225e40ce026fe0d51c360f87c7dc63997bb67fded04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.383682927Z" level=info msg="ignoring event" container=3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.463374531Z" level=info msg="ignoring event" container=672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.588014154Z" level=info msg="ignoring event" container=16d9387dee557367d8e5641c9c0386d812e0e3945335f3d5294a2681ff76c5ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.819239909Z" level=info msg="ignoring event" container=aa6af1ff5693aae2cd14b170cd775ec554d34f4c7ec1db00cb6cfda508dd1b72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
2f002ec003e27 nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf 3 seconds ago Running nginx 0 51674a28bdbb9 nginx
5aa737c3896b8 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 19da29b2da55a gcp-auth-89d5ffd79-s7q4h
5d67e1b5df7f7 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 4211f01616c4c ingress-nginx-controller-bc57996ff-696bh
3ee823d52330f registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 211b5fa5c3a03 ingress-nginx-admission-patch-ssbgc
7ea4889619ec7 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 a8e90a2d2f882 ingress-nginx-admission-create-wjqxw
f0864ef4b730b rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 11 minutes ago Running local-path-provisioner 0 896593063d650 local-path-provisioner-86d989889c-xbncw
e374ff2318f0f marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 12 minutes ago Running yakd 0 78452ebde635f yakd-dashboard-67d98fc6b-n5gz7
c347945eead17 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 4178c11db9b3f kube-ingress-dns-minikube
cbe25e6874536 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 6daffe1a9d10c cloud-spanner-emulator-769b77f747-cdpm7
5596da7bfeb4b nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 65021917ba0b0 nvidia-device-plugin-daemonset-z4pwc
046920352d77b ba04bb24b9575 12 minutes ago Running storage-provisioner 0 83a44bd844c96 storage-provisioner
5c398510d84ba 2f6c962e7b831 12 minutes ago Running coredns 0 9546c123c461d coredns-7c65d6cfc9-g2jtl
19937b7e96a03 24a140c548c07 12 minutes ago Running kube-proxy 0 60896dc860310 kube-proxy-q549p
fdf5b03dfd7ab 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 5f8f5a06066b8 kube-scheduler-addons-648158
229fc23ce1858 27e3830e14027 12 minutes ago Running etcd 0 0a9495f60249b etcd-addons-648158
2721a50c3ab1c 279f381cb3736 12 minutes ago Running kube-controller-manager 0 d87f312af29b3 kube-controller-manager-addons-648158
32402b3960159 d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 5d29da3ec3995 kube-apiserver-addons-648158
==> controller_ingress [5d67e1b5df7f] <==
I0912 21:47:07.397311 8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c33493ec-51e6-4ab9-a543-52417f292017", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0912 21:47:07.399377 8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"dd562a3d-e902-4326-8410-bd0d19832ac6", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0912 21:47:07.399550 8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"c6893e2b-37f5-4831-8e2a-92bee50dee61", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0912 21:47:08.586389 8 nginx.go:317] "Starting NGINX process"
I0912 21:47:08.586623 8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0912 21:47:08.587092 8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0912 21:47:08.587320 8 controller.go:193] "Configuration changes detected, backend reload required"
I0912 21:47:08.615250 8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0912 21:47:08.615430 8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-696bh"
I0912 21:47:08.627030 8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-696bh" node="addons-648158"
I0912 21:47:08.650858 8 controller.go:213] "Backend successfully reloaded"
I0912 21:47:08.651078 8 controller.go:224] "Initial sync, sleeping for 1 second"
I0912 21:47:08.651194 8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-696bh", UID:"52c0720b-3981-4108-8256-513a00d49197", APIVersion:"v1", ResourceVersion:"1245", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0912 21:58:24.195943 8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
I0912 21:58:24.216840 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.021s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.021s testedConfigurationSize:18.1kB}
I0912 21:58:24.216879 8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
I0912 21:58:24.223362 8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
W0912 21:58:24.223690 8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
I0912 21:58:24.223762 8 controller.go:193] "Configuration changes detected, backend reload required"
I0912 21:58:24.225362 8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"c6fb3b37-278e-4c23-ae6b-a1cab589f6d6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2778", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0912 21:58:24.288725 8 controller.go:213] "Backend successfully reloaded"
I0912 21:58:24.289148 8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-696bh", UID:"52c0720b-3981-4108-8256-513a00d49197", APIVersion:"v1", ResourceVersion:"1245", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0912 21:58:27.557462 8 controller.go:193] "Configuration changes detected, backend reload required"
I0912 21:58:27.604298 8 controller.go:213] "Backend successfully reloaded"
I0912 21:58:27.604600 8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-696bh", UID:"52c0720b-3981-4108-8256-513a00d49197", APIVersion:"v1", ResourceVersion:"1245", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [5c398510d84b] <==
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
[INFO] Reloading complete
[INFO] 127.0.0.1:58409 - 24075 "HINFO IN 8776178255420352184.1321134477801143313. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01465829s
[INFO] 10.244.0.7:54519 - 13842 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319804s
[INFO] 10.244.0.7:54519 - 46870 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122418s
[INFO] 10.244.0.7:51322 - 1341 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169342s
[INFO] 10.244.0.7:51322 - 41784 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129925s
[INFO] 10.244.0.7:38911 - 28253 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109938s
[INFO] 10.244.0.7:38911 - 51547 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000224643s
[INFO] 10.244.0.7:37612 - 54543 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093709s
[INFO] 10.244.0.7:37612 - 40201 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088589s
[INFO] 10.244.0.7:59467 - 33407 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002662648s
[INFO] 10.244.0.7:59467 - 33282 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002996769s
[INFO] 10.244.0.7:55106 - 59925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125864s
[INFO] 10.244.0.7:55106 - 39959 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071785s
[INFO] 10.244.0.25:43052 - 3287 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00048236s
[INFO] 10.244.0.25:36270 - 1884 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00025981s
[INFO] 10.244.0.25:45228 - 57249 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000306848s
[INFO] 10.244.0.25:48284 - 58032 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000492058s
[INFO] 10.244.0.25:39213 - 6518 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000269835s
[INFO] 10.244.0.25:48722 - 48718 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000268343s
[INFO] 10.244.0.25:40526 - 5209 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002962407s
[INFO] 10.244.0.25:33228 - 32375 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003231603s
[INFO] 10.244.0.25:50767 - 1726 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002239782s
[INFO] 10.244.0.25:55250 - 34195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002946104s
==> describe nodes <==
Name: addons-648158
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-648158
kubernetes.io/os=linux
minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
minikube.k8s.io/name=addons-648158
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_12T21_45_41_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-648158
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 12 Sep 2024 21:45:38 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-648158
AcquireTime: <unset>
RenewTime: Thu, 12 Sep 2024 21:58:26 +0000
Conditions:
Type              Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----              ------  -----------------                 ------------------                ------                      -------
MemoryPressure    False   Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:35 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure      False   Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:35 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure       False   Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:35 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready             True    Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:38 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-648158
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: 7cd4665a5ebe4de7b1de26fc1d9805e5
System UUID: 1b17f2fa-7c9a-437f-82b1-3b9942bbda88
Boot ID: f14c6faf-727c-4a6f-be07-d8fb37c7dc91
Kernel Version: 5.15.0-1068-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace           Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------           ----                                      ------------  ----------  ---------------  -------------  ---
default             busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
default             cloud-spanner-emulator-769b77f747-cdpm7   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
default             nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
gcp-auth            gcp-auth-89d5ffd79-s7q4h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
ingress-nginx       ingress-nginx-controller-bc57996ff-696bh  100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
kube-system         coredns-7c65d6cfc9-g2jtl                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
kube-system         etcd-addons-648158                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
kube-system         kube-apiserver-addons-648158              250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
kube-system         kube-controller-manager-addons-648158     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
kube-system         kube-ingress-dns-minikube                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
kube-system         kube-proxy-q549p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
kube-system         kube-scheduler-addons-648158              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
kube-system         nvidia-device-plugin-daemonset-z4pwc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
kube-system         storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
local-path-storage  local-path-provisioner-86d989889c-xbncw   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
yakd-dashboard      yakd-dashboard-67d98fc6b-n5gz7            0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                850m (42%)  0 (0%)
memory             388Mi (4%)  426Mi (5%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
hugepages-32Mi     0 (0%)      0 (0%)
hugepages-64Ki     0 (0%)      0 (0%)
Events:
Type     Reason                   Age  From             Message
----     ------                   ---  ----             -------
Normal   Starting                 12m  kube-proxy
Normal   Starting                 12m  kubelet          Starting kubelet.
Warning  CgroupV1                 12m  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal   NodeAllocatableEnforced  12m  kubelet          Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory  12m  kubelet          Node addons-648158 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    12m  kubelet          Node addons-648158 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     12m  kubelet          Node addons-648158 status is now: NodeHasSufficientPID
Normal   RegisteredNode           12m  node-controller  Node addons-648158 event: Registered Node addons-648158 in Controller
==> dmesg <==
[Sep12 21:14] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[Sep12 21:18] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [229fc23ce185] <==
{"level":"info","ts":"2024-09-12T21:45:35.351401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-09-12T21:45:35.351476Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-09-12T21:45:36.325063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-12T21:45:36.325164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-12T21:45:36.325246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-12T21:45:36.325297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-12T21:45:36.325346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-12T21:45:36.325396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-12T21:45:36.325425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-12T21:45:36.328713Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-12T21:45:36.333180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-648158 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-12T21:45:36.333413Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-12T21:45:36.333788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-12T21:45:36.333991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-12T21:45:36.334038Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-12T21:45:36.335165Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-12T21:45:36.336122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-12T21:45:36.341105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-12T21:45:36.341236Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-12T21:45:36.341313Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-12T21:45:36.342105Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-12T21:45:36.343074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-12T21:55:36.488078Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1859}
{"level":"info","ts":"2024-09-12T21:55:36.551492Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1859,"took":"62.401119ms","hash":3646322125,"current-db-size-bytes":9003008,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4952064,"current-db-size-in-use":"5.0 MB"}
{"level":"info","ts":"2024-09-12T21:55:36.551548Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3646322125,"revision":1859,"compact-revision":-1}
==> gcp-auth [5aa737c3896b] <==
2024/09/12 21:48:32 GCP Auth Webhook started!
2024/09/12 21:48:50 Ready to marshal response ...
2024/09/12 21:48:50 Ready to write response ...
2024/09/12 21:48:50 Ready to marshal response ...
2024/09/12 21:48:50 Ready to write response ...
2024/09/12 21:49:13 Ready to marshal response ...
2024/09/12 21:49:13 Ready to write response ...
2024/09/12 21:49:14 Ready to marshal response ...
2024/09/12 21:49:14 Ready to write response ...
2024/09/12 21:49:14 Ready to marshal response ...
2024/09/12 21:49:14 Ready to write response ...
2024/09/12 21:57:24 Ready to marshal response ...
2024/09/12 21:57:24 Ready to write response ...
2024/09/12 21:57:27 Ready to marshal response ...
2024/09/12 21:57:27 Ready to write response ...
2024/09/12 21:57:49 Ready to marshal response ...
2024/09/12 21:57:49 Ready to write response ...
2024/09/12 21:58:24 Ready to marshal response ...
2024/09/12 21:58:24 Ready to write response ...
==> kernel <==
21:58:29 up 6:40, 0 users, load average: 1.53, 1.02, 1.78
Linux addons-648158 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kube-apiserver [32402b396015] <==
W0912 21:49:05.789737 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0912 21:49:05.832384 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0912 21:49:05.867941 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0912 21:49:06.262602 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0912 21:49:06.440846 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0912 21:57:32.801324 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E0912 21:57:34.964620 1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
E0912 21:57:57.414726 1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
I0912 21:58:04.976109 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0912 21:58:04.976159 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0912 21:58:05.006191 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0912 21:58:05.006260 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0912 21:58:05.015169 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0912 21:58:05.015236 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0912 21:58:05.041792 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0912 21:58:05.041839 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0912 21:58:05.195212 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0912 21:58:05.195257 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0912 21:58:06.016796 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0912 21:58:06.195662 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0912 21:58:06.223587 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0912 21:58:18.643675 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0912 21:58:19.669334 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0912 21:58:24.217892 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0912 21:58:24.543092 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.231.111"}
==> kube-controller-manager [2721a50c3ab1] <==
I0912 21:58:16.144780 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0912 21:58:16.144825 1 shared_informer.go:320] Caches are synced for resource quota
I0912 21:58:16.403035 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0912 21:58:16.403088 1 shared_informer.go:320] Caches are synced for garbage collector
W0912 21:58:16.786856 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:16.786986 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:17.660020 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:17.660064 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0912 21:58:19.670948 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:21.188758 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:21.188798 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:22.253411 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:22.253453 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:22.890316 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:22.890357 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:24.156110 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:24.156160 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:27.590408 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:27.590469 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0912 21:58:27.673783 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:27.673834 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0912 21:58:28.282551 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.104µs"
I0912 21:58:28.755736 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
W0912 21:58:29.821979 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 21:58:29.822018 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [19937b7e96a0] <==
I0912 21:45:47.952842 1 server_linux.go:66] "Using iptables proxy"
I0912 21:45:48.094838 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0912 21:45:48.094915 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0912 21:45:48.137889 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0912 21:45:48.137978 1 server_linux.go:169] "Using iptables Proxier"
I0912 21:45:48.140113 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0912 21:45:48.140413 1 server.go:483] "Version info" version="v1.31.1"
I0912 21:45:48.140427 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0912 21:45:48.141815 1 config.go:199] "Starting service config controller"
I0912 21:45:48.141856 1 shared_informer.go:313] Waiting for caches to sync for service config
I0912 21:45:48.141885 1 config.go:105] "Starting endpoint slice config controller"
I0912 21:45:48.141889 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0912 21:45:48.145003 1 config.go:328] "Starting node config controller"
I0912 21:45:48.145045 1 shared_informer.go:313] Waiting for caches to sync for node config
I0912 21:45:48.242327 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0912 21:45:48.242401 1 shared_informer.go:320] Caches are synced for service config
I0912 21:45:48.245969 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [fdf5b03dfd7a] <==
E0912 21:45:38.855250 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0912 21:45:38.854121 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0912 21:45:38.855435 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0912 21:45:38.854193 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0912 21:45:38.855632 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0912 21:45:38.854237 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0912 21:45:38.855846 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0912 21:45:38.854283 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0912 21:45:38.856027 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0912 21:45:38.856167 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0912 21:45:39.685057 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0912 21:45:39.685294 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0912 21:45:39.686564 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0912 21:45:39.686757 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0912 21:45:39.688821 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0912 21:45:39.688851 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0912 21:45:39.697690 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0912 21:45:39.697734 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0912 21:45:39.732230 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0912 21:45:39.732510 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0912 21:45:39.777106 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0912 21:45:39.777147 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0912 21:45:39.821395 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0912 21:45:39.821438 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0912 21:45:42.622076 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 12 21:58:26 addons-648158 kubelet[2335]: E0912 21:58:26.181243 2335 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4e96a45b-06bb-4568-9f4e-c7824346aa4d"
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.477491 2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=1.817970224 podStartE2EDuration="3.477468367s" podCreationTimestamp="2024-09-12 21:58:24 +0000 UTC" firstStartedPulling="2024-09-12 21:58:25.033560295 +0000 UTC m=+764.020565768" lastFinishedPulling="2024-09-12 21:58:26.693058437 +0000 UTC m=+765.680063911" observedRunningTime="2024-09-12 21:58:27.065886935 +0000 UTC m=+766.052892426" watchObservedRunningTime="2024-09-12 21:58:27.477468367 +0000 UTC m=+766.464473858"
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.824672 2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f6bb46dc-e671-4c74-b57f-680c90cfb909-gcp-creds\") pod \"f6bb46dc-e671-4c74-b57f-680c90cfb909\" (UID: \"f6bb46dc-e671-4c74-b57f-680c90cfb909\") "
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.824727 2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtx86\" (UniqueName: \"kubernetes.io/projected/f6bb46dc-e671-4c74-b57f-680c90cfb909-kube-api-access-qtx86\") pod \"f6bb46dc-e671-4c74-b57f-680c90cfb909\" (UID: \"f6bb46dc-e671-4c74-b57f-680c90cfb909\") "
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.825119 2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6bb46dc-e671-4c74-b57f-680c90cfb909-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f6bb46dc-e671-4c74-b57f-680c90cfb909" (UID: "f6bb46dc-e671-4c74-b57f-680c90cfb909"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.844761 2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bb46dc-e671-4c74-b57f-680c90cfb909-kube-api-access-qtx86" (OuterVolumeSpecName: "kube-api-access-qtx86") pod "f6bb46dc-e671-4c74-b57f-680c90cfb909" (UID: "f6bb46dc-e671-4c74-b57f-680c90cfb909"). InnerVolumeSpecName "kube-api-access-qtx86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.925283 2335 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f6bb46dc-e671-4c74-b57f-680c90cfb909-gcp-creds\") on node \"addons-648158\" DevicePath \"\""
Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.925320 2335 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qtx86\" (UniqueName: \"kubernetes.io/projected/f6bb46dc-e671-4c74-b57f-680c90cfb909-kube-api-access-qtx86\") on node \"addons-648158\" DevicePath \"\""
Sep 12 21:58:28 addons-648158 kubelet[2335]: I0912 21:58:28.835870 2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4gc\" (UniqueName: \"kubernetes.io/projected/4a976b45-4ffe-45bb-bf8e-8235e03fda10-kube-api-access-rj4gc\") pod \"4a976b45-4ffe-45bb-bf8e-8235e03fda10\" (UID: \"4a976b45-4ffe-45bb-bf8e-8235e03fda10\") "
Sep 12 21:58:28 addons-648158 kubelet[2335]: I0912 21:58:28.838614 2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a976b45-4ffe-45bb-bf8e-8235e03fda10-kube-api-access-rj4gc" (OuterVolumeSpecName: "kube-api-access-rj4gc") pod "4a976b45-4ffe-45bb-bf8e-8235e03fda10" (UID: "4a976b45-4ffe-45bb-bf8e-8235e03fda10"). InnerVolumeSpecName "kube-api-access-rj4gc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 21:58:28 addons-648158 kubelet[2335]: I0912 21:58:28.936948 2335 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rj4gc\" (UniqueName: \"kubernetes.io/projected/4a976b45-4ffe-45bb-bf8e-8235e03fda10-kube-api-access-rj4gc\") on node \"addons-648158\" DevicePath \"\""
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.037680 2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgcfm\" (UniqueName: \"kubernetes.io/projected/ee258d2f-09b0-4915-82e1-123bba604752-kube-api-access-lgcfm\") pod \"ee258d2f-09b0-4915-82e1-123bba604752\" (UID: \"ee258d2f-09b0-4915-82e1-123bba604752\") "
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.039800 2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee258d2f-09b0-4915-82e1-123bba604752-kube-api-access-lgcfm" (OuterVolumeSpecName: "kube-api-access-lgcfm") pod "ee258d2f-09b0-4915-82e1-123bba604752" (UID: "ee258d2f-09b0-4915-82e1-123bba604752"). InnerVolumeSpecName "kube-api-access-lgcfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.102781 2335 scope.go:117] "RemoveContainer" containerID="672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.139708 2335 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lgcfm\" (UniqueName: \"kubernetes.io/projected/ee258d2f-09b0-4915-82e1-123bba604752-kube-api-access-lgcfm\") on node \"addons-648158\" DevicePath \"\""
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.163249 2335 scope.go:117] "RemoveContainer" containerID="672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
Sep 12 21:58:29 addons-648158 kubelet[2335]: E0912 21:58:29.166068 2335 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10" containerID="672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.166148 2335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"} err="failed to get container status \"672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10\": rpc error: code = Unknown desc = Error response from daemon: No such container: 672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.166176 2335 scope.go:117] "RemoveContainer" containerID="3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.197897 2335 scope.go:117] "RemoveContainer" containerID="3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
Sep 12 21:58:29 addons-648158 kubelet[2335]: E0912 21:58:29.199161 2335 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd" containerID="3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.199306 2335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"} err="failed to get container status \"3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.205655 2335 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a976b45-4ffe-45bb-bf8e-8235e03fda10" path="/var/lib/kubelet/pods/4a976b45-4ffe-45bb-bf8e-8235e03fda10/volumes"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.207571 2335 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee258d2f-09b0-4915-82e1-123bba604752" path="/var/lib/kubelet/pods/ee258d2f-09b0-4915-82e1-123bba604752/volumes"
Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.210653 2335 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6bb46dc-e671-4c74-b57f-680c90cfb909" path="/var/lib/kubelet/pods/f6bb46dc-e671-4c74-b57f-680c90cfb909/volumes"
==> storage-provisioner [046920352d77] <==
I0912 21:45:53.574326 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0912 21:45:53.590499 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0912 21:45:53.590544 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0912 21:45:53.602104 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0912 21:45:53.602444 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-648158_86a231c6-688e-464b-b16e-4dbe50672663!
I0912 21:45:53.603160 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c5fce5c-cd93-4709-b47b-dcd1c6fac236", APIVersion:"v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-648158_86a231c6-688e-464b-b16e-4dbe50672663 became leader
I0912 21:45:53.702814 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-648158_86a231c6-688e-464b-b16e-4dbe50672663!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-648158 -n addons-648158
helpers_test.go:261: (dbg) Run: kubectl --context addons-648158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-648158 describe pod busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-648158 describe pod busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc: exit status 1 (98.911663ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-648158/192.168.49.2
Start Time: Thu, 12 Sep 2024 21:49:14 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ltfqw (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ltfqw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m16s default-scheduler Successfully assigned default/busybox to addons-648158
Normal Pulling 7m56s (x4 over 9m16s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m56s (x4 over 9m16s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m56s (x4 over 9m16s) kubelet Error: ErrImagePull
Warning Failed 7m28s (x6 over 9m15s) kubelet Error: ImagePullBackOff
Normal BackOff 4m4s (x21 over 9m15s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-wjqxw" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-ssbgc" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-648158 describe pod busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.60s)
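Note on the failure: the describe output above shows the busybox pod stuck in Pending with Reason ImagePullBackOff, and the events record that the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected by gcr.io with "unauthorized: authentication failed". Below is a minimal diagnostic sketch (not part of the test suite; a hypothetical helper) that shells out to kubectl, assumed to be on PATH, using the context name taken from this log, and lists any pods whose containers are waiting on ImagePullBackOff or ErrImagePull, which would surface the same condition without re-running the full test.

```go
// imagepull_check.go: hypothetical helper, a sketch only. It assumes kubectl
// is on PATH and that the kube context "addons-648158" (from this log) exists.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models only the fields we need from `kubectl get pods -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name      string `json:"name"`
			Namespace string `json:"namespace"`
		} `json:"metadata"`
		Status struct {
			ContainerStatuses []struct {
				Name  string `json:"name"`
				State struct {
					Waiting *struct {
						Reason  string `json:"reason"`
						Message string `json:"message"`
					} `json:"waiting"`
				} `json:"state"`
			} `json:"containerStatuses"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Dump all pods in all namespaces as JSON from the test cluster.
	out, err := exec.Command("kubectl", "--context", "addons-648158",
		"get", "pods", "-A", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get pods failed: %v", err)
	}

	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatalf("decoding kubectl output: %v", err)
	}

	// Report containers stuck waiting on an image-pull error, as busybox is above.
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			w := cs.State.Waiting
			if w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container %s: %s (%s)\n",
					p.Metadata.Namespace, p.Metadata.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}
```

Running it against the cluster in this log would be expected to report the default/busybox pod with ImagePullBackOff, matching the events in the post-mortem describe output.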