=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 6.50247ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-442hl" [12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005568487s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l52wr" [24cb7649-58a6-4012-827b-a27d68665a07] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.017024142s
addons_test.go:338: (dbg) Run: kubectl --context addons-829722 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-829722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-829722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.1298444s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-829722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
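The assertion at addons_test.go:349 amounts to a substring check on the command's combined output: it must contain an `HTTP/1.1 200` status line from `wget --spider -S`. A minimal sketch of that check (the helper name and sample strings are hypothetical, not minikube's actual code):

```python
def registry_check_passed(output: str) -> bool:
    # wget --spider -S dumps the response headers into the output;
    # the test requires an HTTP 200 status line to appear somewhere in it.
    return "HTTP/1.1 200" in output

# What the failed run actually printed: the wget inside the pod timed out,
# so only the pod-deletion message reached stdout.
failed_output = 'pod "registry-test" deleted\n'

# What a healthy run's output would contain (abbreviated):
healthy_output = 'HTTP/1.1 200 OK\npod "registry-test" deleted\n'
```

Because the check is on raw output rather than an exit code alone, any run where the registry service never answers produces exactly the `expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted*` failure seen above.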
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p addons-829722 ip
2024/09/20 18:00:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p addons-829722 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-829722
helpers_test.go:235: (dbg) docker inspect addons-829722:
-- stdout --
[
{
"Id": "c1f655c99f6246d6e48384bb7bfc05b3580c935330d1720c9f67bb339fe5494b",
"Created": "2024-09-20T17:46:44.657657778Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 134229,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-20T17:46:44.822553946Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
"ResolvConfPath": "/var/lib/docker/containers/c1f655c99f6246d6e48384bb7bfc05b3580c935330d1720c9f67bb339fe5494b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/c1f655c99f6246d6e48384bb7bfc05b3580c935330d1720c9f67bb339fe5494b/hostname",
"HostsPath": "/var/lib/docker/containers/c1f655c99f6246d6e48384bb7bfc05b3580c935330d1720c9f67bb339fe5494b/hosts",
"LogPath": "/var/lib/docker/containers/c1f655c99f6246d6e48384bb7bfc05b3580c935330d1720c9f67bb339fe5494b/c1f655c99f6246d6e48384bb7bfc05b3580c935330d1720c9f67bb339fe5494b-json.log",
"Name": "/addons-829722",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"addons-829722:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-829722",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/bc8df9dcc93c5bd4e4c98386cb9d3bb15c3c72411571dbdb32c9ccb4b63cc8a4-init/diff:/var/lib/docker/overlay2/801f70290e7fe8bce281d02cbd660d7777da11f7746a0acf99f62d07b9865621/diff",
"MergedDir": "/var/lib/docker/overlay2/bc8df9dcc93c5bd4e4c98386cb9d3bb15c3c72411571dbdb32c9ccb4b63cc8a4/merged",
"UpperDir": "/var/lib/docker/overlay2/bc8df9dcc93c5bd4e4c98386cb9d3bb15c3c72411571dbdb32c9ccb4b63cc8a4/diff",
"WorkDir": "/var/lib/docker/overlay2/bc8df9dcc93c5bd4e4c98386cb9d3bb15c3c72411571dbdb32c9ccb4b63cc8a4/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "addons-829722",
"Source": "/var/lib/docker/volumes/addons-829722/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "addons-829722",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-829722",
"name.minikube.sigs.k8s.io": "addons-829722",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "d1ae0f24ae96c9c449dfd9365c0688ecb04a76a8df07f06ef2a8a82a3d52efb7",
"SandboxKey": "/var/run/docker/netns/d1ae0f24ae96",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32808"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32809"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32812"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32810"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32811"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-829722": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "856a0dc4b092e6e8342636da5c7b6b7b9348cc04728e73cc1bdf6bb4bac7e599",
"EndpointID": "bf10d5cb08ca2eed79aea5a1e7531a49a6a742e8ad00596bd96db644c175c6e4",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-829722",
"c1f655c99f62"
]
}
}
}
}
]
-- /stdout --
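The published-port mappings under `NetworkSettings.Ports` in the inspect dump above can be extracted programmatically from `docker inspect` output, which returns a one-element JSON array. A minimal sketch over an abbreviated sample of that structure (helper name hypothetical):

```python
import json

# Abbreviated docker-inspect JSON mirroring the structure above;
# the real data comes from `docker inspect addons-829722`.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32808"}],
        "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32810"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32811"}]
      }
    }
  }
]
""")

def host_endpoint(inspect, container_port):
    # Return "HostIp:HostPort" for the first binding of the given container port.
    binding = inspect[0]["NetworkSettings"]["Ports"][container_port][0]
    return f'{binding["HostIp"]}:{binding["HostPort"]}'
```

For example, `host_endpoint(inspect_output, "5000/tcp")` yields `127.0.0.1:32810`, the host-side address of the registry port; note the earlier `DEBUG GET http://192.168.49.2:5000` bypassed this mapping and hit the container IP directly.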
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-829722 -n addons-829722
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-829722 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-829722 logs -n 25: (1.733328829s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
| addons | enable dashboard -p | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:45 UTC | |
| | addons-829722 | | | | | |
| addons | disable dashboard -p | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:45 UTC | |
| | addons-829722 | | | | | |
| start | -p addons-829722 --wait=true | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:45 UTC | 20 Sep 24 17:50 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-829722 addons disable | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:50 UTC | 20 Sep 24 17:51 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | enable headlamp | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
| | -p addons-829722 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-829722 addons disable | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-829722 addons disable | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
| | -p addons-829722 | | | | | |
| ssh | addons-829722 ssh cat | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:59 UTC | 20 Sep 24 17:59 UTC |
| | /opt/local-path-provisioner/pvc-7bce8c94-2047-46e9-95f9-205615c8956a_default_test-pvc/file1 | | | | | |
| addons | addons-829722 addons disable | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 17:59 UTC | |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-829722 ip | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
| addons | addons-829722 addons disable | addons-829722 | g528047478195_compute | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/20 17:45:57
Running on machine: cs-905301410258-default
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 17:45:57.591796 133751 out.go:345] Setting OutFile to fd 1 ...
I0920 17:45:57.592027 133751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:45:57.592041 133751 out.go:358] Setting ErrFile to fd 2...
I0920 17:45:57.592050 133751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:45:57.592304 133751 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/bin
W0920 17:45:57.592568 133751 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/config/config.json: no such file or directory
I0920 17:45:57.593136 133751 out.go:352] Setting JSON to false
I0920 17:45:57.594478 133751 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":4940,"bootTime":1726849418,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
I0920 17:45:57.594554 133751 start.go:139] virtualization: guest
I0920 17:45:57.598948 133751 out.go:177] * [addons-829722] minikube v1.34.0 on Ubuntu 22.04 (amd64)
W0920 17:45:57.602083 133751 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/cache/preloaded-tarball: no such file or directory
I0920 17:45:57.602140 133751 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0920 17:45:57.602207 133751 notify.go:220] Checking for updates...
I0920 17:45:57.605140 133751 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 17:45:57.608459 133751 out.go:177] - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19679-127678/kubeconfig
I0920 17:45:57.611577 133751 out.go:177] - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19679-127678/.minikube
I0920 17:45:57.614473 133751 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0920 17:45:57.617631 133751 out.go:177] - MINIKUBE_WANTUPDATENOTIFICATION=false
I0920 17:45:57.621584 133751 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 17:45:57.662634 133751 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
I0920 17:45:57.662811 133751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0920 17:45:57.750811 133751 info.go:266] docker info: {ID:18138aa1-da27-4302-bca7-bf2dfc17b20f Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-20 17:45:57.730724426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
I0920 17:45:57.750986 133751 docker.go:318] overlay module found
I0920 17:45:57.754495 133751 out.go:177] * Using the docker driver based on user configuration
I0920 17:45:57.757794 133751 start.go:297] selected driver: docker
I0920 17:45:57.757824 133751 start.go:901] validating driver "docker" against <nil>
I0920 17:45:57.757846 133751 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 17:45:57.758493 133751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0920 17:45:57.838431 133751 info.go:266] docker info: {ID:18138aa1-da27-4302-bca7-bf2dfc17b20f Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-20 17:45:57.822506502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
I0920 17:45:57.838650 133751 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0920 17:45:57.839432 133751 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
I0920 17:45:57.839455 133751 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
I0920 17:45:57.839505 133751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 17:45:57.843010 133751 out.go:177] * Using Docker driver with root privileges
I0920 17:45:57.845562 133751 cni.go:84] Creating CNI manager for ""
I0920 17:45:57.845779 133751 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 17:45:57.845810 133751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0920 17:45:57.845960 133751 start.go:340] cluster config:
{Name:addons-829722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-829722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 17:45:57.848934 133751 out.go:177] * Starting "addons-829722" primary control-plane node in "addons-829722" cluster
I0920 17:45:57.852125 133751 cache.go:121] Beginning downloading kic base image for docker with docker
I0920 17:45:57.854936 133751 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
I0920 17:45:57.857988 133751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:45:57.858202 133751 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
I0920 17:45:57.882436 133751 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
I0920 17:45:57.882867 133751 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
I0920 17:45:57.883015 133751 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
I0920 17:45:57.891597 133751 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I0920 17:45:57.891634 133751 cache.go:56] Caching tarball of preloaded images
I0920 17:45:57.892043 133751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:45:57.895986 133751 out.go:177] * Downloading Kubernetes v1.31.1 preload ...
I0920 17:45:57.898920 133751 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
I0920 17:45:57.933210 133751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I0920 17:46:01.853305 133751 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
I0920 17:46:01.853496 133751 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
I0920 17:46:03.349226 133751 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
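The preload tarball above is downloaded with an md5 checksum embedded in the URL's `?checksum=md5:...` query parameter and verified against it afterwards. That verification amounts to the following (hypothetical helper name, not minikube's actual code):

```python
import hashlib

def md5_matches(data: bytes, expected_hex: str) -> bool:
    # Compare the md5 digest of the downloaded bytes against the
    # hex checksum taken from the ?checksum=md5:... query parameter.
    return hashlib.md5(data).hexdigest() == expected_hex
```

A mismatch at this step would cause the preload to be discarded and re-fetched rather than loaded into the docker cache.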
I0920 17:46:03.350109 133751 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/config.json ...
I0920 17:46:03.350169 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/config.json: {Name:mk07f1ae899e595c9aa0079b40e773b337a67fd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:07.399380 133751 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
I0920 17:46:07.399400 133751 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
I0920 17:46:32.144290 133751 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
I0920 17:46:32.144343 133751 cache.go:194] Successfully downloaded all kic artifacts
I0920 17:46:32.144410 133751 start.go:360] acquireMachinesLock for addons-829722: {Name:mk3be20b4d8b75854ab82fb33219a83f94867826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 17:46:32.144816 133751 start.go:364] duration metric: took 349.223µs to acquireMachinesLock for "addons-829722"
I0920 17:46:32.144903 133751 start.go:93] Provisioning new machine with config: &{Name:addons-829722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-829722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 17:46:32.145065 133751 start.go:125] createHost starting for "" (driver="docker")
I0920 17:46:32.149786 133751 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0920 17:46:32.150283 133751 start.go:159] libmachine.API.Create for "addons-829722" (driver="docker")
I0920 17:46:32.150336 133751 client.go:168] LocalClient.Create starting
I0920 17:46:32.150490 133751 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca.pem
I0920 17:46:32.309448 133751 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/cert.pem
I0920 17:46:32.505181 133751 cli_runner.go:164] Run: docker network inspect addons-829722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 17:46:32.528952 133751 cli_runner.go:211] docker network inspect addons-829722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 17:46:32.529100 133751 network_create.go:284] running [docker network inspect addons-829722] to gather additional debugging logs...
I0920 17:46:32.529129 133751 cli_runner.go:164] Run: docker network inspect addons-829722
W0920 17:46:32.554256 133751 cli_runner.go:211] docker network inspect addons-829722 returned with exit code 1
I0920 17:46:32.554294 133751 network_create.go:287] error running [docker network inspect addons-829722]: docker network inspect addons-829722: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-829722 not found
I0920 17:46:32.554315 133751 network_create.go:289] output of [docker network inspect addons-829722]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-829722 not found
** /stderr **
I0920 17:46:32.554495 133751 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 17:46:32.578021 133751 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc01686d6c0}
I0920 17:46:32.578085 133751 network_create.go:124] attempt to create docker network addons-829722 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
I0920 17:46:32.578199 133751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-829722 addons-829722
I0920 17:46:32.677228 133751 network_create.go:108] docker network addons-829722 192.168.49.0/24 created
I0920 17:46:32.677272 133751 kic.go:121] calculated static IP "192.168.49.2" for the "addons-829722" container
I0920 17:46:32.677421 133751 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0920 17:46:32.701666 133751 cli_runner.go:164] Run: docker volume create addons-829722 --label name.minikube.sigs.k8s.io=addons-829722 --label created_by.minikube.sigs.k8s.io=true
I0920 17:46:32.729163 133751 oci.go:103] Successfully created a docker volume addons-829722
I0920 17:46:32.729300 133751 cli_runner.go:164] Run: docker run --rm --name addons-829722-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-829722 --entrypoint /usr/bin/test -v addons-829722:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
I0920 17:46:37.014292 133751 cli_runner.go:217] Completed: docker run --rm --name addons-829722-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-829722 --entrypoint /usr/bin/test -v addons-829722:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (4.284943338s)
I0920 17:46:37.014342 133751 oci.go:107] Successfully prepared a docker volume addons-829722
I0920 17:46:37.014366 133751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:46:37.014398 133751 kic.go:194] Starting extracting preloaded images to volume ...
I0920 17:46:37.014530 133751 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-829722:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
I0920 17:46:44.540461 133751 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-829722:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (7.52584247s)
I0920 17:46:44.540507 133751 kic.go:203] duration metric: took 7.526104146s to extract preloaded images to volume ...
W0920 17:46:44.540689 133751 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0920 17:46:44.540902 133751 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0920 17:46:44.540989 133751 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0920 17:46:44.635476 133751 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-829722 --name addons-829722 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-829722 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-829722 --network addons-829722 --ip 192.168.49.2 --volume addons-829722:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
I0920 17:46:45.068914 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Running}}
I0920 17:46:45.112863 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:46:45.153450 133751 cli_runner.go:164] Run: docker exec addons-829722 stat /var/lib/dpkg/alternatives/iptables
I0920 17:46:45.249807 133751 oci.go:144] the created container "addons-829722" has a running status.
I0920 17:46:45.249849 133751 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa...
I0920 17:46:45.731786 133751 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0920 17:46:45.830854 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:46:45.874578 133751 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0920 17:46:45.874605 133751 kic_runner.go:114] Args: [docker exec --privileged addons-829722 chown docker:docker /home/docker/.ssh/authorized_keys]
I0920 17:46:46.006889 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:46:46.068866 133751 machine.go:93] provisionDockerMachine start ...
I0920 17:46:46.069016 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:46.139926 133751 main.go:141] libmachine: Using SSH client type: native
I0920 17:46:46.140274 133751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32808 <nil> <nil>}
I0920 17:46:46.140294 133751 main.go:141] libmachine: About to run SSH command:
hostname
I0920 17:46:46.336429 133751 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-829722
I0920 17:46:46.336622 133751 ubuntu.go:169] provisioning hostname "addons-829722"
I0920 17:46:46.336790 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:46.379986 133751 main.go:141] libmachine: Using SSH client type: native
I0920 17:46:46.380298 133751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32808 <nil> <nil>}
I0920 17:46:46.380342 133751 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-829722 && echo "addons-829722" | sudo tee /etc/hostname
I0920 17:46:46.601249 133751 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-829722
I0920 17:46:46.601480 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:46.633570 133751 main.go:141] libmachine: Using SSH client type: native
I0920 17:46:46.633907 133751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32808 <nil> <nil>}
I0920 17:46:46.633936 133751 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-829722' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-829722/g' /etc/hosts;
else
echo '127.0.1.1 addons-829722' | sudo tee -a /etc/hosts;
fi
fi
I0920 17:46:46.796145 133751 main.go:141] libmachine: SSH cmd err, output: <nil>:
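The /etc/hosts patch that just ran (the `if ! grep … sed … tee` script above) can be exercised locally against a throwaway file; this is a minimal sketch, assuming a scratch copy rather than the real /etc/hosts, with the `addons-829722` name taken from the log:

```shell
# Reproduce minikube's /etc/hosts hostname patch against a scratch copy.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

name=addons-829722
# If no line already maps the hostname, rewrite the 127.0.1.1 entry
# (or append one when it is missing), mirroring the logged script.
if ! grep -q "\s${name}$" "$hosts"; then
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${name}/" "$hosts"
  else
    echo "127.0.1.1 ${name}" >> "$hosts"
  fi
fi
cat "$hosts"
rm -f "$hosts"
```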
I0920 17:46:46.796252 133751 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube}
I0920 17:46:46.796331 133751 ubuntu.go:177] setting up certificates
I0920 17:46:46.796352 133751 provision.go:84] configureAuth start
I0920 17:46:46.796506 133751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-829722
I0920 17:46:46.822147 133751 provision.go:143] copyHostCerts
I0920 17:46:46.822256 133751 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.pem (1119 bytes)
I0920 17:46:46.822472 133751 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/cert.pem (1159 bytes)
I0920 17:46:46.822600 133751 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/key.pem (1679 bytes)
I0920 17:46:46.822739 133751 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-829722 san=[127.0.0.1 192.168.49.2 addons-829722 localhost minikube]
I0920 17:46:46.983567 133751 provision.go:177] copyRemoteCerts
I0920 17:46:46.983686 133751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0920 17:46:46.983809 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:47.008919 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:46:47.115593 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
I0920 17:46:47.152935 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0920 17:46:47.189819 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1119 bytes)
I0920 17:46:47.227161 133751 provision.go:87] duration metric: took 430.785101ms to configureAuth
I0920 17:46:47.227202 133751 ubuntu.go:193] setting minikube options for container-runtime
I0920 17:46:47.227500 133751 config.go:182] Loaded profile config "addons-829722": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:46:47.227614 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:47.253226 133751 main.go:141] libmachine: Using SSH client type: native
I0920 17:46:47.253540 133751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32808 <nil> <nil>}
I0920 17:46:47.253564 133751 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0920 17:46:47.403589 133751 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0920 17:46:47.403621 133751 ubuntu.go:71] root file system type: overlay
I0920 17:46:47.403837 133751 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0920 17:46:47.403956 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:47.432013 133751 main.go:141] libmachine: Using SSH client type: native
I0920 17:46:47.432351 133751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32808 <nil> <nil>}
I0920 17:46:47.432570 133751 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0920 17:46:47.603454 133751 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0920 17:46:47.603737 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:47.631138 133751 main.go:141] libmachine: Using SSH client type: native
I0920 17:46:47.631437 133751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32808 <nil> <nil>}
I0920 17:46:47.631472 133751 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0920 17:46:48.727581 133751 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:41.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-20 17:46:47.600500464 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
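The update step above follows a diff-and-swap pattern: render the new unit to `docker.service.new`, and only when `diff` reports a difference, move it into place and reload/restart. A minimal sketch of that pattern on scratch files (paths and unit contents here are illustrative assumptions; the real run replaces the echo with `systemctl daemon-reload && systemctl restart docker`):

```shell
# Mimic minikube's "install unit only if changed" pattern on scratch files.
old=$(mktemp)
new=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$old"
echo 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376' > "$new"

# diff exits non-zero when the files differ, which triggers the swap;
# an identical rendering leaves the installed unit untouched.
diff -u "$old" "$new" > /dev/null || {
  mv "$new" "$old"
  echo "unit replaced; would daemon-reload && restart docker"
}
grep -q 'tcp://0.0.0.0:2376' "$old" && echo "swap applied"
rm -f "$old" "$new"
```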
I0920 17:46:48.727733 133751 machine.go:96] duration metric: took 2.658812694s to provisionDockerMachine
I0920 17:46:48.727797 133751 client.go:171] duration metric: took 16.577450169s to LocalClient.Create
I0920 17:46:48.727877 133751 start.go:167] duration metric: took 16.577574979s to libmachine.API.Create "addons-829722"
I0920 17:46:48.727924 133751 start.go:293] postStartSetup for "addons-829722" (driver="docker")
I0920 17:46:48.727959 133751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0920 17:46:48.728107 133751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0920 17:46:48.728248 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:48.758136 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:46:48.863918 133751 ssh_runner.go:195] Run: cat /etc/os-release
I0920 17:46:48.868881 133751 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0920 17:46:48.868930 133751 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0920 17:46:48.868946 133751 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0920 17:46:48.868959 133751 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0920 17:46:48.868976 133751 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/addons for local assets ...
I0920 17:46:48.869103 133751 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/files for local assets ...
I0920 17:46:48.869154 133751 start.go:296] duration metric: took 141.20175ms for postStartSetup
I0920 17:46:48.869687 133751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-829722
I0920 17:46:48.894645 133751 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/config.json ...
I0920 17:46:48.895148 133751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0920 17:46:48.895251 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:48.921605 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:46:49.021092 133751 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0920 17:46:49.028095 133751 start.go:128] duration metric: took 16.883003447s to createHost
I0920 17:46:49.028136 133751 start.go:83] releasing machines lock for "addons-829722", held for 16.883269768s
I0920 17:46:49.028282 133751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-829722
I0920 17:46:49.057375 133751 ssh_runner.go:195] Run: cat /version.json
I0920 17:46:49.057430 133751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0920 17:46:49.057461 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:49.057525 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:46:49.089798 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:46:49.105212 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:46:49.202741 133751 ssh_runner.go:195] Run: systemctl --version
I0920 17:46:49.361616 133751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0920 17:46:49.371392 133751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0920 17:46:49.431634 133751 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0920 17:46:49.431852 133751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0920 17:46:49.472990 133751 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0920 17:46:49.473040 133751 start.go:495] detecting cgroup driver to use...
I0920 17:46:49.473112 133751 detect.go:190] detected "systemd" cgroup driver on host os
I0920 17:46:49.473303 133751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 17:46:49.498975 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0920 17:46:49.514164 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0920 17:46:49.529308 133751 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I0920 17:46:49.529468 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0920 17:46:49.544586 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 17:46:49.559614 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0920 17:46:49.574122 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 17:46:49.589029 133751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0920 17:46:49.602833 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0920 17:46:49.617634 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0920 17:46:49.632833 133751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
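The run of `sed -i` edits above rewrites /etc/containerd/config.toml in place. The same transformations can be checked against a scratch config; the file content below is a minimal assumption for illustration, not the real kicbase config:

```shell
# Apply the containerd config rewrites from the log to a scratch file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = true
  SystemdCgroup = false
EOF

sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' "$cfg"
# Drop any existing flag, then re-add it under the CRI plugin table,
# matching the delete-then-insert pair in the log.
sed -i '/^ *enable_unprivileged_ports = .*/d' "$cfg"
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$cfg"
cat "$cfg"
rm -f "$cfg"
```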
I0920 17:46:49.648126 133751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0920 17:46:49.661464 133751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0920 17:46:49.675381 133751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 17:46:49.806687 133751 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0920 17:46:49.915599 133751 start.go:495] detecting cgroup driver to use...
I0920 17:46:49.915655 133751 detect.go:190] detected "systemd" cgroup driver on host os
I0920 17:46:49.915754 133751 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0920 17:46:49.949917 133751 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0920 17:46:49.950026 133751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0920 17:46:49.978636 133751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 17:46:50.016334 133751 ssh_runner.go:195] Run: which cri-dockerd
I0920 17:46:50.024123 133751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0920 17:46:50.043644 133751 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0920 17:46:50.081079 133751 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0920 17:46:50.295673 133751 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0920 17:46:50.488617 133751 docker.go:574] configuring docker to use "systemd" as cgroup driver...
I0920 17:46:50.488843 133751 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I0920 17:46:50.517529 133751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 17:46:50.646629 133751 ssh_runner.go:195] Run: sudo systemctl restart docker
I0920 17:46:51.060533 133751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0920 17:46:51.078682 133751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 17:46:51.096608 133751 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0920 17:46:51.232929 133751 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0920 17:46:51.364826 133751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 17:46:51.492462 133751 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0920 17:46:51.519638 133751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 17:46:51.537141 133751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 17:46:51.663968 133751 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0920 17:46:51.766387 133751 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0920 17:46:51.766695 133751 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0920 17:46:51.774994 133751 start.go:563] Will wait 60s for crictl version
I0920 17:46:51.775126 133751 ssh_runner.go:195] Run: which crictl
I0920 17:46:51.782426 133751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0920 17:46:51.839104 133751 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0920 17:46:51.839228 133751 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0920 17:46:51.887664 133751 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0920 17:46:51.939651 133751 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0920 17:46:51.939858 133751 cli_runner.go:164] Run: docker network inspect addons-829722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 17:46:51.963913 133751 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0920 17:46:51.969530 133751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0920 17:46:51.992599 133751 out.go:177] - kubelet.cgroups-per-qos=false
I0920 17:46:51.995888 133751 out.go:177] - kubelet.enforce-node-allocatable=""
I0920 17:46:51.999056 133751 kubeadm.go:883] updating cluster {Name:addons-829722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-829722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0920 17:46:51.999250 133751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:46:51.999397 133751 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0920 17:46:52.028726 133751 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0920 17:46:52.028769 133751 docker.go:615] Images already preloaded, skipping extraction
I0920 17:46:52.028879 133751 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0920 17:46:52.057190 133751 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0920 17:46:52.057223 133751 cache_images.go:84] Images are preloaded, skipping loading
I0920 17:46:52.057240 133751 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0920 17:46:52.057395 133751 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-829722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-829722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0920 17:46:52.057528 133751 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0920 17:46:52.131912 133751 cni.go:84] Creating CNI manager for ""
I0920 17:46:52.131952 133751 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 17:46:52.131970 133751 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0920 17:46:52.132014 133751 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-829722 NodeName:addons-829722 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0920 17:46:52.132293 133751 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "addons-829722"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0920 17:46:52.132484 133751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0920 17:46:52.146478 133751 binaries.go:44] Found k8s binaries, skipping transfer
I0920 17:46:52.146599 133751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0920 17:46:52.160468 133751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
I0920 17:46:52.188614 133751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0920 17:46:52.216982 133751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
I0920 17:46:52.244900 133751 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0920 17:46:52.250478 133751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0920 17:46:52.267408 133751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 17:46:52.395628 133751 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 17:46:52.425128 133751 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722 for IP: 192.168.49.2
I0920 17:46:52.425163 133751 certs.go:194] generating shared ca certs ...
I0920 17:46:52.425189 133751 certs.go:226] acquiring lock for ca certs: {Name:mkdfac9b3222e569795a6a50d3b6fe22c86a3d02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:52.425568 133751 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.key
I0920 17:46:52.570534 133751 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.crt ...
I0920 17:46:52.570572 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.crt: {Name:mka86eb089768de9671a9d1c92269f4033de25b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:52.571034 133751 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.key ...
I0920 17:46:52.571061 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.key: {Name:mk9171332de1bb69c394102d932d7285f9f9ecbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:52.571390 133751 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.key
I0920 17:46:52.833936 133751 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.crt ...
I0920 17:46:52.833973 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.crt: {Name:mka8e28504f57957db5cea929fda1c7679004d49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:52.834412 133751 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.key ...
I0920 17:46:52.834438 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.key: {Name:mk94b3dc91eb3a2c4ce80f4b557a5e8fc3dde537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:52.834877 133751 certs.go:256] generating profile certs ...
I0920 17:46:52.835036 133751 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/client.key
I0920 17:46:52.835088 133751 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/client.crt with IP's: []
I0920 17:46:53.021540 133751 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/client.crt ...
I0920 17:46:53.021582 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/client.crt: {Name:mk80e673df4506fc6e67c4ed93807dc14a764656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:53.022059 133751 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/client.key ...
I0920 17:46:53.022091 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/client.key: {Name:mkd900f74b1d3c31234479a4808d16f9253d869e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:53.022417 133751 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.key.bd714d7f
I0920 17:46:53.022470 133751 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.crt.bd714d7f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0920 17:46:53.275908 133751 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.crt.bd714d7f ...
I0920 17:46:53.275951 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.crt.bd714d7f: {Name:mk125fa033d4dedda05e8599820ab0846dccd492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:53.276386 133751 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.key.bd714d7f ...
I0920 17:46:53.276415 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.key.bd714d7f: {Name:mk7c6d9aa88bf1b07cd4b121e17912f0cf5dd10c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:53.276761 133751 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.crt.bd714d7f -> /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.crt
I0920 17:46:53.276939 133751 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.key.bd714d7f -> /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.key
I0920 17:46:53.277052 133751 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.key
I0920 17:46:53.277088 133751 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.crt with IP's: []
I0920 17:46:53.632872 133751 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.crt ...
I0920 17:46:53.632913 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.crt: {Name:mkfddf613be5e4ffeda973a2b07c0f921cb8c713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:53.633388 133751 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.key ...
I0920 17:46:53.633418 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.key: {Name:mk44ce1ea031e80d48c8c3d386e30fe3aedb8a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:46:53.633907 133751 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca-key.pem (1675 bytes)
I0920 17:46:53.633975 133751 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/ca.pem (1119 bytes)
I0920 17:46:53.634042 133751 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/cert.pem (1159 bytes)
I0920 17:46:53.634105 133751 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/certs/key.pem (1679 bytes)
I0920 17:46:53.635028 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0920 17:46:53.675332 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0920 17:46:53.713427 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0920 17:46:53.751615 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0920 17:46:53.789152 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0920 17:46:53.825538 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0920 17:46:53.861912 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0920 17:46:53.898233 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/profiles/addons-829722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0920 17:46:53.935329 133751 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19679-127678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0920 17:46:53.976113 133751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0920 17:46:54.005327 133751 ssh_runner.go:195] Run: openssl version
I0920 17:46:54.013474 133751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0920 17:46:54.028545 133751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0920 17:46:54.034268 133751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:46 /usr/share/ca-certificates/minikubeCA.pem
I0920 17:46:54.034375 133751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0920 17:46:54.044310 133751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0920 17:46:54.061189 133751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0920 17:46:54.069320 133751 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0920 17:46:54.069394 133751 kubeadm.go:392] StartCluster: {Name:addons-829722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-829722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 17:46:54.069600 133751 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0920 17:46:54.113057 133751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0920 17:46:54.136216 133751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0920 17:46:54.155584 133751 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0920 17:46:54.155688 133751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0920 17:46:54.179496 133751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0920 17:46:54.179525 133751 kubeadm.go:157] found existing configuration files:
I0920 17:46:54.179781 133751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0920 17:46:54.194050 133751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0920 17:46:54.194247 133751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0920 17:46:54.207408 133751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0920 17:46:54.222010 133751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0920 17:46:54.222125 133751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0920 17:46:54.235201 133751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0920 17:46:54.248983 133751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0920 17:46:54.249098 133751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0920 17:46:54.262305 133751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0920 17:46:54.275931 133751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0920 17:46:54.276033 133751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0920 17:46:54.289291 133751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0920 17:46:54.346578 133751 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0920 17:46:54.346903 133751 kubeadm.go:310] [preflight] Running pre-flight checks
I0920 17:46:54.461286 133751 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0920 17:46:54.461470 133751 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0920 17:46:54.461613 133751 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0920 17:46:54.482828 133751 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0920 17:46:54.487205 133751 out.go:235] - Generating certificates and keys ...
I0920 17:46:54.487356 133751 kubeadm.go:310] [certs] Using existing ca certificate authority
I0920 17:46:54.487482 133751 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0920 17:46:54.886853 133751 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0920 17:46:55.250756 133751 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0920 17:46:55.352042 133751 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0920 17:46:55.522600 133751 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0920 17:46:55.966852 133751 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0920 17:46:55.967306 133751 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-829722 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0920 17:46:56.170179 133751 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0920 17:46:56.170649 133751 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-829722 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0920 17:46:56.400316 133751 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0920 17:46:56.567019 133751 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0920 17:46:56.845183 133751 kubeadm.go:310] [certs] Generating "sa" key and public key
I0920 17:46:56.845530 133751 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0920 17:46:56.943117 133751 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0920 17:46:57.182299 133751 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0920 17:46:57.518871 133751 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0920 17:46:57.753793 133751 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0920 17:46:58.188061 133751 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0920 17:46:58.188959 133751 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0920 17:46:58.192250 133751 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0920 17:46:58.195650 133751 out.go:235] - Booting up control plane ...
I0920 17:46:58.195834 133751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0920 17:46:58.195954 133751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0920 17:46:58.196461 133751 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0920 17:46:58.211306 133751 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0920 17:46:58.222017 133751 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0920 17:46:58.222104 133751 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0920 17:46:58.365263 133751 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0920 17:46:58.367188 133751 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0920 17:46:59.369741 133751 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001993792s
I0920 17:46:59.369901 133751 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0920 17:47:06.372357 133751 kubeadm.go:310] [api-check] The API server is healthy after 7.003193567s
I0920 17:47:06.395831 133751 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0920 17:47:06.415872 133751 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0920 17:47:06.450467 133751 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0920 17:47:06.450790 133751 kubeadm.go:310] [mark-control-plane] Marking the node addons-829722 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0920 17:47:06.465849 133751 kubeadm.go:310] [bootstrap-token] Using token: tnhk1v.kwci8jlo39xos7n0
I0920 17:47:06.469507 133751 out.go:235] - Configuring RBAC rules ...
I0920 17:47:06.469693 133751 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0920 17:47:06.474953 133751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0920 17:47:06.485117 133751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0920 17:47:06.489646 133751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0920 17:47:06.494416 133751 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0920 17:47:06.501029 133751 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0920 17:47:06.786198 133751 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0920 17:47:07.364337 133751 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0920 17:47:07.786280 133751 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0920 17:47:07.787992 133751 kubeadm.go:310]
I0920 17:47:07.788109 133751 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0920 17:47:07.788121 133751 kubeadm.go:310]
I0920 17:47:07.788322 133751 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0920 17:47:07.788332 133751 kubeadm.go:310]
I0920 17:47:07.788377 133751 kubeadm.go:310] mkdir -p $HOME/.kube
I0920 17:47:07.788476 133751 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0920 17:47:07.788560 133751 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0920 17:47:07.788570 133751 kubeadm.go:310]
I0920 17:47:07.788662 133751 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0920 17:47:07.788671 133751 kubeadm.go:310]
I0920 17:47:07.788770 133751 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0920 17:47:07.788782 133751 kubeadm.go:310]
I0920 17:47:07.788873 133751 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0920 17:47:07.789011 133751 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0920 17:47:07.789136 133751 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0920 17:47:07.789147 133751 kubeadm.go:310]
I0920 17:47:07.789297 133751 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0920 17:47:07.789437 133751 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0920 17:47:07.789447 133751 kubeadm.go:310]
I0920 17:47:07.789593 133751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tnhk1v.kwci8jlo39xos7n0 \
I0920 17:47:07.789959 133751 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:26826a6d94638730e9447e756bcb18ab1c8b9e73ea65446fe36b072fb73d8201 \
I0920 17:47:07.790007 133751 kubeadm.go:310] --control-plane
I0920 17:47:07.790016 133751 kubeadm.go:310]
I0920 17:47:07.790133 133751 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0920 17:47:07.790142 133751 kubeadm.go:310]
I0920 17:47:07.790266 133751 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tnhk1v.kwci8jlo39xos7n0 \
I0920 17:47:07.790431 133751 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:26826a6d94638730e9447e756bcb18ab1c8b9e73ea65446fe36b072fb73d8201
I0920 17:47:07.795232 133751 kubeadm.go:310] W0920 17:46:54.342424 1687 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 17:47:07.795789 133751 kubeadm.go:310] W0920 17:46:54.343633 1687 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 17:47:07.796020 133751 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0920 17:47:07.796045 133751 cni.go:84] Creating CNI manager for ""
I0920 17:47:07.796069 133751 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 17:47:07.799366 133751 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0920 17:47:07.801939 133751 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0920 17:47:07.817456 133751 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0920 17:47:07.847220 133751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0920 17:47:07.847336 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:07.847429 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-829722 minikube.k8s.io/updated_at=2024_09_20T17_47_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-829722 minikube.k8s.io/primary=true
I0920 17:47:08.054273 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:08.054347 133751 ops.go:34] apiserver oom_adj: -16
I0920 17:47:08.555212 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:09.054430 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:09.555337 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:10.054467 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:10.554913 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:11.054602 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:11.555038 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:12.054976 133751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 17:47:12.236798 133751 kubeadm.go:1113] duration metric: took 4.38957622s to wait for elevateKubeSystemPrivileges
I0920 17:47:12.236841 133751 kubeadm.go:394] duration metric: took 18.167457185s to StartCluster
I0920 17:47:12.236868 133751 settings.go:142] acquiring lock: {Name:mka2a1b25ebbb3c3c4398b936537ed668f35ceb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:47:12.237213 133751 settings.go:150] Updating kubeconfig: /home/g528047478195_compute/minikube-integration/19679-127678/kubeconfig
I0920 17:47:12.238023 133751 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19679-127678/kubeconfig: {Name:mk5ae8cbc61ce5f7c5b6d38d0563c86838eaf325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 17:47:12.238485 133751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 17:47:12.238842 133751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0920 17:47:12.239185 133751 config.go:182] Loaded profile config "addons-829722": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:47:12.239222 133751 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0920 17:47:12.239345 133751 addons.go:69] Setting yakd=true in profile "addons-829722"
I0920 17:47:12.239388 133751 addons.go:234] Setting addon yakd=true in "addons-829722"
I0920 17:47:12.239433 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.240233 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.240830 133751 addons.go:69] Setting inspektor-gadget=true in profile "addons-829722"
I0920 17:47:12.240864 133751 addons.go:234] Setting addon inspektor-gadget=true in "addons-829722"
I0920 17:47:12.240905 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.241671 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.242024 133751 addons.go:69] Setting metrics-server=true in profile "addons-829722"
I0920 17:47:12.242051 133751 addons.go:234] Setting addon metrics-server=true in "addons-829722"
I0920 17:47:12.242087 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.242873 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.245599 133751 addons.go:69] Setting cloud-spanner=true in profile "addons-829722"
I0920 17:47:12.245635 133751 addons.go:234] Setting addon cloud-spanner=true in "addons-829722"
I0920 17:47:12.245675 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.246432 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.252807 133751 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-829722"
I0920 17:47:12.252910 133751 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-829722"
I0920 17:47:12.252961 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.253406 133751 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-829722"
I0920 17:47:12.253443 133751 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-829722"
I0920 17:47:12.253480 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.253846 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.255098 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.264104 133751 addons.go:69] Setting default-storageclass=true in profile "addons-829722"
I0920 17:47:12.264150 133751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-829722"
I0920 17:47:12.264664 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.265156 133751 addons.go:69] Setting registry=true in profile "addons-829722"
I0920 17:47:12.265188 133751 addons.go:234] Setting addon registry=true in "addons-829722"
I0920 17:47:12.265271 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.266245 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.284526 133751 addons.go:69] Setting gcp-auth=true in profile "addons-829722"
I0920 17:47:12.284593 133751 mustload.go:65] Loading cluster: addons-829722
I0920 17:47:12.285377 133751 config.go:182] Loaded profile config "addons-829722": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:47:12.285977 133751 addons.go:69] Setting storage-provisioner=true in profile "addons-829722"
I0920 17:47:12.286004 133751 addons.go:234] Setting addon storage-provisioner=true in "addons-829722"
I0920 17:47:12.286050 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.286893 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.286934 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.302552 133751 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-829722"
I0920 17:47:12.302592 133751 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-829722"
I0920 17:47:12.306109 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.306651 133751 addons.go:69] Setting ingress=true in profile "addons-829722"
I0920 17:47:12.306679 133751 addons.go:234] Setting addon ingress=true in "addons-829722"
I0920 17:47:12.306792 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.307516 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.336058 133751 addons.go:69] Setting volcano=true in profile "addons-829722"
I0920 17:47:12.336100 133751 addons.go:234] Setting addon volcano=true in "addons-829722"
I0920 17:47:12.336188 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.337154 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.343045 133751 addons.go:69] Setting ingress-dns=true in profile "addons-829722"
I0920 17:47:12.343086 133751 addons.go:234] Setting addon ingress-dns=true in "addons-829722"
I0920 17:47:12.343173 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.344134 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.374625 133751 addons.go:69] Setting volumesnapshots=true in profile "addons-829722"
I0920 17:47:12.374667 133751 addons.go:234] Setting addon volumesnapshots=true in "addons-829722"
I0920 17:47:12.374816 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:12.375744 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:12.400986 133751 out.go:177] * Verifying Kubernetes components...
I0920 17:47:12.549492 133751 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0920 17:47:12.598068 133751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 17:47:12.661352 133751 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 17:47:12.661457 133751 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0920 17:47:12.661584 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.672203 133751 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0920 17:47:12.680783 133751 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0920 17:47:12.683481 133751 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0920 17:47:12.688038 133751 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 17:47:12.688144 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0920 17:47:12.688319 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.714576 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0920 17:47:12.714737 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.773464 133751 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0920 17:47:12.778462 133751 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 17:47:12.778498 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0920 17:47:12.778607 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.888922 133751 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0920 17:47:12.905349 133751 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0920 17:47:12.906000 133751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0920 17:47:12.913083 133751 out.go:177] - Using image docker.io/registry:2.8.3
I0920 17:47:12.917250 133751 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0920 17:47:12.917345 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0920 17:47:12.917515 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.935087 133751 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0920 17:47:12.935128 133751 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0920 17:47:12.959815 133751 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0920 17:47:12.960117 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.963015 133751 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0920 17:47:12.963146 133751 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0920 17:47:12.963341 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:12.987397 133751 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0920 17:47:12.995616 133751 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0920 17:47:12.995740 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0920 17:47:12.995933 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.171086 133751 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0920 17:47:13.174211 133751 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0920 17:47:13.177521 133751 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0920 17:47:13.182802 133751 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0920 17:47:13.183119 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0920 17:47:13.183313 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.198548 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.204749 133751 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0920 17:47:13.207534 133751 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 17:47:13.207638 133751 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0920 17:47:13.207878 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.225149 133751 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0920 17:47:13.228642 133751 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0920 17:47:13.232112 133751 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0920 17:47:13.236986 133751 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0920 17:47:13.242949 133751 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0920 17:47:13.247387 133751 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0920 17:47:13.252787 133751 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0920 17:47:13.256941 133751 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0920 17:47:13.261809 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 17:47:13.261845 133751 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0920 17:47:13.261982 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.279559 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.307816 133751 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-829722"
I0920 17:47:13.307957 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:13.308686 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:13.336238 133751 cli_runner.go:217] Completed: docker container inspect addons-829722 --format={{.State.Status}}: (1.028666987s)
I0920 17:47:13.341778 133751 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0920 17:47:13.344100 133751 cli_runner.go:217] Completed: docker container inspect addons-829722 --format={{.State.Status}}: (1.079389888s)
I0920 17:47:13.345542 133751 addons.go:234] Setting addon default-storageclass=true in "addons-829722"
I0920 17:47:13.345601 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:13.347561 133751 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 17:47:13.350629 133751 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 17:47:13.353489 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:13.356118 133751 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0920 17:47:13.356147 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0920 17:47:13.356256 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.424992 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.426021 133751 cli_runner.go:217] Completed: docker container inspect addons-829722 --format={{.State.Status}}: (1.138788286s)
I0920 17:47:13.426070 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:13.451274 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.496840 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.506330 133751 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 17:47:13.568778 133751 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0920 17:47:13.572064 133751 out.go:177] - Using image docker.io/busybox:stable
I0920 17:47:13.577453 133751 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 17:47:13.577486 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0920 17:47:13.577621 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.609249 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.623690 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.633092 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.644299 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.646060 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.703103 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.777616 133751 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0920 17:47:13.777650 133751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0920 17:47:13.777909 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:13.783089 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
W0920 17:47:13.789037 133751 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0920 17:47:13.789083 133751 retry.go:31] will retry after 234.978314ms: ssh: handshake failed: EOF
I0920 17:47:13.825402 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:13.892356 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:14.217305 133751 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 17:47:14.217410 133751 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0920 17:47:14.455632 133751 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0920 17:47:14.455667 133751 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0920 17:47:14.491884 133751 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 17:47:14.491935 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0920 17:47:14.513213 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0920 17:47:14.544902 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 17:47:14.544940 133751 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0920 17:47:14.570618 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 17:47:14.611295 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0920 17:47:14.635431 133751 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0920 17:47:14.635506 133751 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0920 17:47:14.665170 133751 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0920 17:47:14.665210 133751 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0920 17:47:14.730151 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 17:47:14.772831 133751 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0920 17:47:14.772895 133751 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0920 17:47:14.785968 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 17:47:14.812998 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 17:47:14.865556 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0920 17:47:14.876237 133751 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 17:47:14.876271 133751 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0920 17:47:14.882209 133751 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 17:47:14.882241 133751 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0920 17:47:14.897819 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0920 17:47:14.974036 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 17:47:14.974085 133751 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0920 17:47:15.035160 133751 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0920 17:47:15.035190 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0920 17:47:15.082218 133751 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 17:47:15.082250 133751 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0920 17:47:15.201839 133751 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0920 17:47:15.201876 133751 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0920 17:47:15.345897 133751 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0920 17:47:15.345932 133751 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0920 17:47:15.368632 133751 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 17:47:15.368668 133751 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0920 17:47:15.400079 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 17:47:15.400114 133751 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0920 17:47:15.495112 133751 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.589019249s)
I0920 17:47:15.495179 133751 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0920 17:47:15.497016 133751 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.990646102s)
I0920 17:47:15.498315 133751 node_ready.go:35] waiting up to 6m0s for node "addons-829722" to be "Ready" ...
I0920 17:47:15.521814 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0920 17:47:15.549592 133751 node_ready.go:49] node "addons-829722" has status "Ready":"True"
I0920 17:47:15.549632 133751 node_ready.go:38] duration metric: took 51.269074ms for node "addons-829722" to be "Ready" ...
I0920 17:47:15.549646 133751 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 17:47:15.593921 133751 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 17:47:15.593964 133751 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0920 17:47:15.604626 133751 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0920 17:47:15.604654 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0920 17:47:15.693188 133751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace to be "Ready" ...
I0920 17:47:15.770081 133751 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 17:47:15.770119 133751 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0920 17:47:15.781901 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 17:47:15.781938 133751 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0920 17:47:15.787241 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0920 17:47:16.011041 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0920 17:47:16.099675 133751 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 17:47:16.099780 133751 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0920 17:47:16.149376 133751 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-829722" context rescaled to 1 replicas
I0920 17:47:16.204735 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 17:47:16.204774 133751 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0920 17:47:16.240520 133751 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 17:47:16.240557 133751 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0920 17:47:16.539535 133751 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0920 17:47:16.539571 133751 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0920 17:47:16.631312 133751 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 17:47:16.631347 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0920 17:47:16.857308 133751 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 17:47:16.857342 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0920 17:47:17.261993 133751 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0920 17:47:17.262024 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0920 17:47:17.500656 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 17:47:17.868376 133751 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 17:47:17.868418 133751 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0920 17:47:17.960455 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0920 17:47:18.125119 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:18.379191 133751 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 17:47:18.379226 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0920 17:47:18.859159 133751 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 17:47:18.859189 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0920 17:47:19.693767 133751 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 17:47:19.693800 133751 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0920 17:47:19.897077 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 17:47:20.237640 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:22.381092 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:24.394520 133751 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0920 17:47:24.394655 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:24.447816 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:24.677674 133751 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0920 17:47:24.736317 133751 addons.go:234] Setting addon gcp-auth=true in "addons-829722"
I0920 17:47:24.736397 133751 host.go:66] Checking if "addons-829722" exists ...
I0920 17:47:24.737289 133751 cli_runner.go:164] Run: docker container inspect addons-829722 --format={{.State.Status}}
I0920 17:47:24.794608 133751 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0920 17:47:24.794696 133751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-829722
I0920 17:47:24.852447 133751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19679-127678/.minikube/machines/addons-829722/id_rsa Username:docker}
I0920 17:47:24.935274 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:27.638534 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:29.904126 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:32.018611 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:34.378235 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:36.522534 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (22.009267189s)
I0920 17:47:36.522584 133751 addons.go:475] Verifying addon ingress=true in "addons-829722"
I0920 17:47:36.522954 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (21.952287369s)
I0920 17:47:36.526814 133751 out.go:177] * Verifying ingress addon...
I0920 17:47:36.531444 133751 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0920 17:47:36.769363 133751 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0920 17:47:36.769401 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:36.817557 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:37.043568 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:37.718905 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:38.146760 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:38.553961 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:39.119350 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (24.508007989s)
I0920 17:47:39.119492 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (24.389305838s)
I0920 17:47:39.119834 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (24.333830216s)
I0920 17:47:39.119926 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (24.306888453s)
I0920 17:47:39.119958 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (24.254374549s)
I0920 17:47:39.120169 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (24.222316689s)
I0920 17:47:39.120220 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (23.598375537s)
I0920 17:47:39.120236 133751 addons.go:475] Verifying addon registry=true in "addons-829722"
I0920 17:47:39.120582 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (23.333299176s)
I0920 17:47:39.120609 133751 addons.go:475] Verifying addon metrics-server=true in "addons-829722"
I0920 17:47:39.120690 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (23.109608826s)
I0920 17:47:39.121222 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (21.620517451s)
W0920 17:47:39.121272 133751 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 17:47:39.121310 133751 retry.go:31] will retry after 143.462165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 17:47:39.121423 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (21.16091745s)
I0920 17:47:39.124283 133751 out.go:177] * Verifying registry addon...
I0920 17:47:39.124529 133751 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-829722 service yakd-dashboard -n yakd-dashboard
I0920 17:47:39.130284 133751 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0920 17:47:39.266098 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 17:47:39.537980 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:39.539486 133751 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0920 17:47:39.539510 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:39.677232 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:39.691786 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W0920 17:47:39.704860 133751 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0920 17:47:40.007078 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:40.306754 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (20.409594816s)
I0920 17:47:40.306796 133751 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-829722"
I0920 17:47:40.307368 133751 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (15.51271767s)
I0920 17:47:40.310850 133751 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 17:47:40.311018 133751 out.go:177] * Verifying csi-hostpath-driver addon...
I0920 17:47:40.313943 133751 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0920 17:47:40.315394 133751 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 17:47:40.317682 133751 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 17:47:40.317727 133751 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0920 17:47:40.356926 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:40.357900 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:40.497204 133751 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 17:47:40.497327 133751 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0920 17:47:40.721976 133751 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 17:47:40.722017 133751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0920 17:47:40.804019 133751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 17:47:41.235659 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:41.237506 133751 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 17:47:41.237856 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:41.275582 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:41.431178 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:41.431449 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:41.778484 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:41.778997 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:41.983576 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:42.027431 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:42.048006 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:42.147977 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:42.190952 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:42.270792 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:42.811557 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:42.820316 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:42.829325 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:42.932672 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:43.066742 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:43.414661 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:43.698080 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:43.940869 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:43.942385 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:44.023013 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:44.177654 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:44.186088 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:44.320105 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:44.343931 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:44.453863 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.187680535s)
I0920 17:47:44.570328 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:44.894377 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:44.937668 133751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (4.133513643s)
I0920 17:47:44.943934 133751 addons.go:475] Verifying addon gcp-auth=true in "addons-829722"
I0920 17:47:44.947490 133751 out.go:177] * Verifying gcp-auth addon...
I0920 17:47:44.952264 133751 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0920 17:47:44.971323 133751 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 17:47:44.973740 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:45.046856 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:45.152813 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:45.323386 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:45.537213 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:45.728188 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:45.873123 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:46.073483 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:46.136454 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:46.323831 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:46.537841 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:46.635860 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:46.707257 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:46.824816 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:47.040637 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:47.145571 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:47.360292 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:47.569640 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:47.654082 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:47.838413 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:48.049926 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:48.153012 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:48.324305 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:48.540329 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:48.639517 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:48.826187 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:49.039494 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:49.138042 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:49.271087 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:49.345593 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:49.539873 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:49.635817 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:49.824699 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:50.038757 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:50.137311 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:50.326296 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:50.542531 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:50.639148 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:50.822965 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:51.038334 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:51.145859 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:51.324468 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:51.541179 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:51.647767 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:51.705426 133751 pod_ready.go:103] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"False"
I0920 17:47:51.825978 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:52.078327 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:52.159161 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:52.204698 133751 pod_ready.go:93] pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace has status "Ready":"True"
I0920 17:47:52.204839 133751 pod_ready.go:82] duration metric: took 36.511601785s for pod "coredns-7c65d6cfc9-8m4r5" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.204902 133751 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s2shn" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.209508 133751 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-s2shn" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s2shn" not found
I0920 17:47:52.209614 133751 pod_ready.go:82] duration metric: took 4.665407ms for pod "coredns-7c65d6cfc9-s2shn" in "kube-system" namespace to be "Ready" ...
E0920 17:47:52.209673 133751 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-s2shn" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s2shn" not found
I0920 17:47:52.209736 133751 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.225110 133751 pod_ready.go:93] pod "etcd-addons-829722" in "kube-system" namespace has status "Ready":"True"
I0920 17:47:52.225240 133751 pod_ready.go:82] duration metric: took 15.464938ms for pod "etcd-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.225307 133751 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.238954 133751 pod_ready.go:93] pod "kube-apiserver-addons-829722" in "kube-system" namespace has status "Ready":"True"
I0920 17:47:52.239058 133751 pod_ready.go:82] duration metric: took 13.704702ms for pod "kube-apiserver-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.239134 133751 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.249154 133751 pod_ready.go:93] pod "kube-controller-manager-addons-829722" in "kube-system" namespace has status "Ready":"True"
I0920 17:47:52.249257 133751 pod_ready.go:82] duration metric: took 10.073708ms for pod "kube-controller-manager-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.249327 133751 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22p56" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.332122 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:52.399530 133751 pod_ready.go:93] pod "kube-proxy-22p56" in "kube-system" namespace has status "Ready":"True"
I0920 17:47:52.399650 133751 pod_ready.go:82] duration metric: took 150.275211ms for pod "kube-proxy-22p56" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.399746 133751 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.542983 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:52.642329 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:52.798670 133751 pod_ready.go:93] pod "kube-scheduler-addons-829722" in "kube-system" namespace has status "Ready":"True"
I0920 17:47:52.798722 133751 pod_ready.go:82] duration metric: took 398.915518ms for pod "kube-scheduler-addons-829722" in "kube-system" namespace to be "Ready" ...
I0920 17:47:52.798736 133751 pod_ready.go:39] duration metric: took 37.24907471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 17:47:52.798767 133751 api_server.go:52] waiting for apiserver process to appear ...
I0920 17:47:52.799038 133751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 17:47:52.843119 133751 api_server.go:72] duration metric: took 40.604586084s to wait for apiserver process to appear ...
I0920 17:47:52.843340 133751 api_server.go:88] waiting for apiserver healthz status ...
I0920 17:47:52.843426 133751 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0920 17:47:52.847659 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:52.854639 133751 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0920 17:47:52.856337 133751 api_server.go:141] control plane version: v1.31.1
I0920 17:47:52.856459 133751 api_server.go:131] duration metric: took 13.057267ms to wait for apiserver health ...
I0920 17:47:52.856496 133751 system_pods.go:43] waiting for kube-system pods to appear ...
I0920 17:47:53.014556 133751 system_pods.go:59] 17 kube-system pods found
I0920 17:47:53.014624 133751 system_pods.go:61] "coredns-7c65d6cfc9-8m4r5" [2ad34c47-3125-49a2-b5b9-7d3fb4f5f2be] Running
I0920 17:47:53.014653 133751 system_pods.go:61] "csi-hostpath-attacher-0" [7d2cf5dc-13a2-4dd4-953e-1c451c0e76d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 17:47:53.014674 133751 system_pods.go:61] "csi-hostpath-resizer-0" [f47ddf26-747e-4b3e-b0aa-8f7752155c00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 17:47:53.014841 133751 system_pods.go:61] "csi-hostpathplugin-j2jxb" [7004f3b1-e6b5-416d-8325-3ebf8c0a1edb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 17:47:53.014899 133751 system_pods.go:61] "etcd-addons-829722" [f4c6df2a-4926-4b48-9563-49225506df99] Running
I0920 17:47:53.014909 133751 system_pods.go:61] "kube-apiserver-addons-829722" [ba220068-739c-4020-a370-25c8cd972afb] Running
I0920 17:47:53.014916 133751 system_pods.go:61] "kube-controller-manager-addons-829722" [adcf0132-3b0e-4a41-a28b-ba5f1223e995] Running
I0920 17:47:53.014930 133751 system_pods.go:61] "kube-ingress-dns-minikube" [850c5f05-a4ac-4fee-b47a-15c690003e94] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0920 17:47:53.014938 133751 system_pods.go:61] "kube-proxy-22p56" [1eba3420-d573-4cdf-becb-9ebb1b52f030] Running
I0920 17:47:53.014946 133751 system_pods.go:61] "kube-scheduler-addons-829722" [24fb3677-ac3f-4ebb-b100-6ca09bf8ac08] Running
I0920 17:47:53.014957 133751 system_pods.go:61] "metrics-server-84c5f94fbc-ddfdg" [a06cdc89-bb46-4422-89dc-f48d20266ea4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 17:47:53.014968 133751 system_pods.go:61] "nvidia-device-plugin-daemonset-qtldp" [165d6890-5abd-46d5-a11f-63d3c593796d] Running
I0920 17:47:53.014995 133751 system_pods.go:61] "registry-66c9cd494c-442hl" [12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0920 17:47:53.015007 133751 system_pods.go:61] "registry-proxy-l52wr" [24cb7649-58a6-4012-827b-a27d68665a07] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 17:47:53.015049 133751 system_pods.go:61] "snapshot-controller-56fcc65765-b77hh" [47c06696-26b7-48af-98fc-5618bf5400e5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 17:47:53.015090 133751 system_pods.go:61] "snapshot-controller-56fcc65765-cprfs" [911f2463-fca0-45d4-904c-dd1af31feecc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 17:47:53.015098 133751 system_pods.go:61] "storage-provisioner" [b30969c8-04f0-4e32-a6a3-40f86b8144f8] Running
I0920 17:47:53.015111 133751 system_pods.go:74] duration metric: took 158.567483ms to wait for pod list to return data ...
I0920 17:47:53.015143 133751 default_sa.go:34] waiting for default service account to be created ...
I0920 17:47:53.040486 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:53.135991 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:53.198974 133751 default_sa.go:45] found service account: "default"
I0920 17:47:53.199075 133751 default_sa.go:55] duration metric: took 183.916403ms for default service account to be created ...
I0920 17:47:53.199102 133751 system_pods.go:116] waiting for k8s-apps to be running ...
I0920 17:47:53.323395 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:53.413007 133751 system_pods.go:86] 17 kube-system pods found
I0920 17:47:53.413138 133751 system_pods.go:89] "coredns-7c65d6cfc9-8m4r5" [2ad34c47-3125-49a2-b5b9-7d3fb4f5f2be] Running
I0920 17:47:53.413206 133751 system_pods.go:89] "csi-hostpath-attacher-0" [7d2cf5dc-13a2-4dd4-953e-1c451c0e76d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 17:47:53.413300 133751 system_pods.go:89] "csi-hostpath-resizer-0" [f47ddf26-747e-4b3e-b0aa-8f7752155c00] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 17:47:53.413387 133751 system_pods.go:89] "csi-hostpathplugin-j2jxb" [7004f3b1-e6b5-416d-8325-3ebf8c0a1edb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 17:47:53.413450 133751 system_pods.go:89] "etcd-addons-829722" [f4c6df2a-4926-4b48-9563-49225506df99] Running
I0920 17:47:53.413492 133751 system_pods.go:89] "kube-apiserver-addons-829722" [ba220068-739c-4020-a370-25c8cd972afb] Running
I0920 17:47:53.413524 133751 system_pods.go:89] "kube-controller-manager-addons-829722" [adcf0132-3b0e-4a41-a28b-ba5f1223e995] Running
I0920 17:47:53.413601 133751 system_pods.go:89] "kube-ingress-dns-minikube" [850c5f05-a4ac-4fee-b47a-15c690003e94] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0920 17:47:53.413650 133751 system_pods.go:89] "kube-proxy-22p56" [1eba3420-d573-4cdf-becb-9ebb1b52f030] Running
I0920 17:47:53.413680 133751 system_pods.go:89] "kube-scheduler-addons-829722" [24fb3677-ac3f-4ebb-b100-6ca09bf8ac08] Running
I0920 17:47:53.413735 133751 system_pods.go:89] "metrics-server-84c5f94fbc-ddfdg" [a06cdc89-bb46-4422-89dc-f48d20266ea4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 17:47:53.413816 133751 system_pods.go:89] "nvidia-device-plugin-daemonset-qtldp" [165d6890-5abd-46d5-a11f-63d3c593796d] Running
I0920 17:47:53.413848 133751 system_pods.go:89] "registry-66c9cd494c-442hl" [12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0920 17:47:53.413893 133751 system_pods.go:89] "registry-proxy-l52wr" [24cb7649-58a6-4012-827b-a27d68665a07] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 17:47:53.413961 133751 system_pods.go:89] "snapshot-controller-56fcc65765-b77hh" [47c06696-26b7-48af-98fc-5618bf5400e5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 17:47:53.414015 133751 system_pods.go:89] "snapshot-controller-56fcc65765-cprfs" [911f2463-fca0-45d4-904c-dd1af31feecc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 17:47:53.414080 133751 system_pods.go:89] "storage-provisioner" [b30969c8-04f0-4e32-a6a3-40f86b8144f8] Running
I0920 17:47:53.414167 133751 system_pods.go:126] duration metric: took 215.043236ms to wait for k8s-apps to be running ...
I0920 17:47:53.414214 133751 system_svc.go:44] waiting for kubelet service to be running ....
I0920 17:47:53.414353 133751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0920 17:47:53.445303 133751 system_svc.go:56] duration metric: took 31.077653ms WaitForService to wait for kubelet
I0920 17:47:53.445346 133751 kubeadm.go:582] duration metric: took 41.206821468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 17:47:53.445375 133751 node_conditions.go:102] verifying NodePressure condition ...
I0920 17:47:53.539876 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:53.601145 133751 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
I0920 17:47:53.601187 133751 node_conditions.go:123] node cpu capacity is 2
I0920 17:47:53.601204 133751 node_conditions.go:105] duration metric: took 155.822575ms to run NodePressure ...
I0920 17:47:53.601222 133751 start.go:241] waiting for startup goroutines ...
I0920 17:47:53.638767 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:53.824693 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:54.040652 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:54.139388 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:54.324056 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:54.539518 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:54.654280 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:54.824256 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:55.040432 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:55.136176 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:55.333069 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:55.539392 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:55.637492 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:55.824603 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:56.041660 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:56.137680 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:56.325009 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:56.783584 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:56.783702 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:57.039986 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:57.042688 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:57.497378 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:57.501872 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:57.538237 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:57.636806 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:57.826955 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:58.039956 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:58.157329 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:58.346578 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:58.544572 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:58.641733 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:58.863465 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:59.044624 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:59.139790 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:59.334830 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:47:59.607514 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:47:59.638271 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:47:59.825618 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:00.041839 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:00.141357 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:00.363479 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:00.566080 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:00.641959 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:00.844813 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:01.050626 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:01.140390 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:01.509468 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:01.611608 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:01.637653 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:01.837023 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:02.038378 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:02.136016 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:02.325210 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:02.555865 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:02.643565 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:02.844318 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:03.048489 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:03.190566 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:03.341551 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:03.556788 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:03.641778 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:03.827591 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:04.040480 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:04.137521 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:04.333174 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:04.543283 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:04.641698 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:04.823031 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:05.038703 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:05.136476 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:05.326688 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:05.539206 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:05.638510 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:05.822849 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:06.066249 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:06.205693 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:06.324827 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:06.542098 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:06.656229 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:06.839238 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:07.051772 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:07.139655 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:07.328816 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:07.540613 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:07.637592 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:07.822990 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:08.039387 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:08.136771 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:08.359393 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:08.537597 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:08.635678 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:08.822277 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:09.037555 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:09.368809 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:09.370500 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:09.576211 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:09.636200 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 17:48:09.825809 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:10.038020 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:10.137316 133751 kapi.go:107] duration metric: took 31.007037956s to wait for kubernetes.io/minikube-addons=registry ...
I0920 17:48:10.325117 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:10.538545 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:10.825071 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:11.037823 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:11.338359 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:11.546374 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:11.823127 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:12.039158 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:12.331581 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:12.545925 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:12.828556 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:13.044073 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:13.340802 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:13.537879 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:13.822884 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:14.046661 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:14.375659 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:14.538201 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:14.821489 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:15.037918 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:15.325649 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:15.541912 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:15.836902 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:16.067895 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:16.390954 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:16.562766 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:16.825956 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:17.041033 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:17.339590 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:17.632756 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:17.824456 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:18.052889 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:18.325973 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:18.546287 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:18.823448 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:19.038022 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:19.352388 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:19.540602 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:19.828983 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:20.043203 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:20.323548 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:20.594337 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:20.866393 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:21.038197 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:21.323719 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:21.550577 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:21.873646 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:22.052471 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:22.326185 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:22.544731 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:22.831152 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:23.051736 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:23.332482 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:23.551537 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:23.829326 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:24.122103 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:24.321648 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:24.540961 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:24.823200 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:25.085829 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:25.345912 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:25.540898 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:25.833159 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:26.042676 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:26.350584 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:26.556306 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:26.876375 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:27.090224 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:27.323610 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:27.537613 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:27.886759 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:28.045832 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:28.329271 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:28.542720 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:29.022392 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:29.094677 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:29.340991 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:29.546238 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:29.851147 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:30.069552 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:30.324473 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:30.633678 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:30.854095 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:31.037492 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:31.323741 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:31.540694 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:31.842625 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:32.062869 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:32.328998 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:32.544063 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:32.824511 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:33.081506 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:33.337175 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:33.550635 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:33.834629 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:34.041060 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:34.335624 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:34.749427 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:34.826374 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:35.037834 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:35.331647 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:35.540175 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:35.824395 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:36.043296 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:36.322780 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:36.539984 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:36.857090 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:37.038841 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:37.338967 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:37.537473 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:37.826918 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:38.037563 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:38.368344 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:38.584969 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:38.874250 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:39.045382 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:39.352573 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:39.545363 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:39.862136 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:40.057518 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:40.380369 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:40.569914 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:40.829018 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:41.211386 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:41.323022 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:41.595417 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:41.825065 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:42.076993 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:42.326298 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:42.548371 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:42.837263 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:43.043585 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:43.361335 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:43.555177 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:43.928805 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:44.073152 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:44.333762 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:44.603245 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:44.824834 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:45.042091 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:45.325964 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:45.556303 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:45.822670 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:46.043414 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:46.332901 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:46.541123 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:46.830802 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:47.039919 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:47.323900 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:47.589347 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:47.825539 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:48.041621 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:48.324639 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:48.551458 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:48.839764 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:49.045372 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:49.328332 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:49.691824 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:50.348288 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:50.349358 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:50.356227 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:50.604945 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:50.824702 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:51.048318 133751 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 17:48:51.351315 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:51.570237 133751 kapi.go:107] duration metric: took 1m15.038773863s to wait for app.kubernetes.io/name=ingress-nginx ...
I0920 17:48:51.823513 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:52.322307 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:52.848654 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:53.339246 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:53.947790 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:54.323473 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:54.828254 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:55.323380 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:55.823441 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:56.323819 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 17:48:56.822955 133751 kapi.go:107] duration metric: took 1m16.507556552s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0920 17:49:07.460598 133751 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 17:49:07.460630 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 17:49:07.957326 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 17:49:08.457607 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 17:49:08.957192 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 17:50:14.459034 133751 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 17:50:14.956902 133751 kapi.go:107] duration metric: took 2m30.004607365s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0920 17:50:14.960669 133751 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-829722 cluster.
I0920 17:50:14.963565 133751 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0920 17:50:14.966907 133751 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0920 17:50:14.973602 133751 out.go:177] * Enabled addons: ingress-dns, volcano, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I0920 17:50:14.976358 133751 addons.go:510] duration metric: took 3m2.73712078s for enable addons: enabled=[ingress-dns volcano nvidia-device-plugin storage-provisioner cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I0920 17:50:14.976513 133751 start.go:246] waiting for cluster config update ...
I0920 17:50:14.976616 133751 start.go:255] writing updated cluster config ...
I0920 17:50:14.977170 133751 ssh_runner.go:195] Run: rm -f paused
I0920 17:50:15.568438 133751 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0920 17:50:15.572063 133751 out.go:177] * Done! kubectl is now configured to use "addons-829722" cluster and "default" namespace by default
==> Docker <==
Sep 20 17:59:41 addons-829722 dockerd[1163]: time="2024-09-20T17:59:41.814831038Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 20 17:59:41 addons-829722 dockerd[1163]: time="2024-09-20T17:59:41.815102642Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 20 17:59:41 addons-829722 dockerd[1163]: time="2024-09-20T17:59:41.825719175Z" level=error msg="Error running exec 0974fafff7c0ee114de901d5e9131c47861476eccd511c38e7a47a803f1e8c59 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 20 17:59:41 addons-829722 dockerd[1163]: time="2024-09-20T17:59:41.843591077Z" level=info msg="ignoring event" container=23647d3ccbf3fc3ad76aeb829789f93069e585370c6f4afe1b1c82edeb2fd61f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:44 addons-829722 dockerd[1163]: time="2024-09-20T17:59:44.957036152Z" level=info msg="ignoring event" container=baade27d6328b3b666010fd9a132105c163e46e50cc690040f7d235900d99c5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:45 addons-829722 dockerd[1163]: time="2024-09-20T17:59:45.161240447Z" level=info msg="ignoring event" container=281a89418d6da83455d23afbe577975a0fc457f93595faffaa4f704c6d2424f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:46 addons-829722 cri-dockerd[1419]: time="2024-09-20T17:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd47993d3df802ab44dd6273657fbf82796f1fbc0b366fb18c2439f04b871ad7/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east1-c.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
Sep 20 17:59:46 addons-829722 dockerd[1163]: time="2024-09-20T17:59:46.381830162Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Sep 20 17:59:47 addons-829722 cri-dockerd[1419]: time="2024-09-20T17:59:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Sep 20 17:59:47 addons-829722 dockerd[1163]: time="2024-09-20T17:59:47.394379500Z" level=info msg="ignoring event" container=c473b0c90bf91d6a5138380e304dd4be10682552a1b4d02221347fda4fabddd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:48 addons-829722 dockerd[1163]: time="2024-09-20T17:59:48.976338412Z" level=info msg="ignoring event" container=dd47993d3df802ab44dd6273657fbf82796f1fbc0b366fb18c2439f04b871ad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:51 addons-829722 cri-dockerd[1419]: time="2024-09-20T17:59:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/399d96613279c57e61e4ed0b606c9571de26d5a4cd1e42d7cf122114ea48a705/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east1-c.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
Sep 20 17:59:52 addons-829722 cri-dockerd[1419]: time="2024-09-20T17:59:52Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
Sep 20 17:59:52 addons-829722 dockerd[1163]: time="2024-09-20T17:59:52.356420018Z" level=info msg="ignoring event" container=92474da861e0a848b7861bafd84d014201534a055c045a2a1940be00213759d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:54 addons-829722 dockerd[1163]: time="2024-09-20T17:59:54.355682258Z" level=info msg="ignoring event" container=399d96613279c57e61e4ed0b606c9571de26d5a4cd1e42d7cf122114ea48a705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:56 addons-829722 cri-dockerd[1419]: time="2024-09-20T17:59:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a23b5bf8cb921c67c805662153f0848e6c8531fd3207df65b0afa0c203da4f60/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east1-c.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
Sep 20 17:59:56 addons-829722 dockerd[1163]: time="2024-09-20T17:59:56.233091089Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 20 17:59:56 addons-829722 dockerd[1163]: time="2024-09-20T17:59:56.235973862Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 20 17:59:56 addons-829722 dockerd[1163]: time="2024-09-20T17:59:56.527070690Z" level=info msg="ignoring event" container=0804fe88939e2a4120fc131255fbd88a9a8a2adb697fe14d3ce3c72f46ea9fb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 17:59:58 addons-829722 dockerd[1163]: time="2024-09-20T17:59:58.627520551Z" level=info msg="ignoring event" container=a23b5bf8cb921c67c805662153f0848e6c8531fd3207df65b0afa0c203da4f60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 18:00:17 addons-829722 dockerd[1163]: time="2024-09-20T18:00:17.131896041Z" level=info msg="ignoring event" container=bb06a0d15f359fa2911145b9ed9c5ac3f93f3faf78cbef1aca84092c6b551ae6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 18:00:18 addons-829722 dockerd[1163]: time="2024-09-20T18:00:18.309483514Z" level=info msg="ignoring event" container=670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 18:00:18 addons-829722 dockerd[1163]: time="2024-09-20T18:00:18.418318053Z" level=info msg="ignoring event" container=a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 18:00:18 addons-829722 dockerd[1163]: time="2024-09-20T18:00:18.712859182Z" level=info msg="ignoring event" container=32628b742daff54ef03c6fcae4a2ae5fd6ec891fc73342c0ae0cff4efaf6ec96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 18:00:18 addons-829722 dockerd[1163]: time="2024-09-20T18:00:18.943050926Z" level=info msg="ignoring event" container=27168b495730f5448f76a171d336ae5774dfee2a6716264ca224a3a08969bbfb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
23647d3ccbf3f ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 40 seconds ago Exited gadget 7 c9b24682e2267 gadget-rsrqf
88e23afca6ccb gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 121c4969da9e6 gcp-auth-89d5ffd79-sx9cl
494a534884ae7 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 11 minutes ago Running csi-snapshotter 0 71313a2a3337f csi-hostpathplugin-j2jxb
ed5eabc3afedd registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 11 minutes ago Running csi-provisioner 0 71313a2a3337f csi-hostpathplugin-j2jxb
cad0032376cbc registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 11 minutes ago Running liveness-probe 0 71313a2a3337f csi-hostpathplugin-j2jxb
521e19a46cad5 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 a05f0af4a8a11 ingress-nginx-controller-bc57996ff-5bj5q
d2205052aa8dd registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 11 minutes ago Running hostpath 0 71313a2a3337f csi-hostpathplugin-j2jxb
c1ff021b7fdff registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 11 minutes ago Running node-driver-registrar 0 71313a2a3337f csi-hostpathplugin-j2jxb
9a0d1dfb34a2e registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 11 minutes ago Running csi-resizer 0 788afac856a40 csi-hostpath-resizer-0
9280a1b370e5f registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 11 minutes ago Running csi-external-health-monitor-controller 0 71313a2a3337f csi-hostpathplugin-j2jxb
08711de6b7e3d registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 11 minutes ago Running csi-attacher 0 0978294bb9bda csi-hostpath-attacher-0
b81131dcf4952 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 eec9ff376f46a ingress-nginx-admission-patch-wg52m
219b39eac1318 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 17f87c3e5d42d ingress-nginx-admission-create-nm9vm
db16663f36ace registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 11 minutes ago Running volume-snapshot-controller 0 a13453920b1b3 snapshot-controller-56fcc65765-b77hh
1e75b7c19ad0b registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 11 minutes ago Running volume-snapshot-controller 0 409c88560388d snapshot-controller-56fcc65765-cprfs
ae46e017916eb rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 d03a1802a975c local-path-provisioner-86d989889c-52mpq
4dbc0e2b34dce registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 c977f571599ab metrics-server-84c5f94fbc-ddfdg
075a4629c398e gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 7d81bbb7d2afb kube-ingress-dns-minikube
84aa018391cdf gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 12 minutes ago Running cloud-spanner-emulator 0 fe7a123509f6e cloud-spanner-emulator-5b584cc74-rxdwr
aeef481ba152b 6e38f40d628db 12 minutes ago Running storage-provisioner 0 dd7fd041bb242 storage-provisioner
677fc7dc70211 c69fa2e9cbf5f 13 minutes ago Running coredns 0 4d4cb5e1ea604 coredns-7c65d6cfc9-8m4r5
adfbb9157753c 60c005f310ff3 13 minutes ago Running kube-proxy 0 eaba8729d997f kube-proxy-22p56
9360adb1cb138 175ffd71cce3d 13 minutes ago Running kube-controller-manager 0 be51dbfb783db kube-controller-manager-addons-829722
bdcaac566afc6 2e96e5913fc06 13 minutes ago Running etcd 0 e3c826ea429e0 etcd-addons-829722
d22fbef53b639 9aa1fad941575 13 minutes ago Running kube-scheduler 0 7e907054cce7b kube-scheduler-addons-829722
d61034f8cde4c 6bab7719df100 13 minutes ago Running kube-apiserver 0 f4f72f395d1a5 kube-apiserver-addons-829722
==> controller_ingress [521e19a46cad] <==
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
-------------------------------------------------------------------------------
W0920 17:48:50.977341 7 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0920 17:48:50.978229 7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I0920 17:48:50.988001 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
I0920 17:48:51.524120 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0920 17:48:51.562188 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0920 17:48:51.591872 7 nginx.go:271] "Starting NGINX Ingress controller"
I0920 17:48:51.640078 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f46e71a2-f3ed-4a45-aa25-06a393d9943c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0920 17:48:51.647343 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"fbc91651-aa36-4640-a936-c502dd61ed56", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0920 17:48:51.647416 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"864eda3b-b230-4503-90c3-293af5894922", APIVersion:"v1", ResourceVersion:"734", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0920 17:48:52.796081 7 nginx.go:317] "Starting NGINX process"
I0920 17:48:52.796316 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0920 17:48:52.803588 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0920 17:48:52.804638 7 controller.go:193] "Configuration changes detected, backend reload required"
I0920 17:48:52.811569 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0920 17:48:52.825782 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-5bj5q"
I0920 17:48:52.942749 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-5bj5q" node="addons-829722"
I0920 17:48:53.073958 7 controller.go:213] "Backend successfully reloaded"
I0920 17:48:53.074195 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0920 17:48:53.075900 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-5bj5q", UID:"aac5aa86-c2df-443b-baf9-a23dc743e9c5", APIVersion:"v1", ResourceVersion:"1273", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [677fc7dc7021] <==
[INFO] 10.244.0.8:41409 - 60458 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00021052s
[INFO] 10.244.0.8:60682 - 59333 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009005s
[INFO] 10.244.0.8:60682 - 3273 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000199609s
[INFO] 10.244.0.8:45899 - 22408 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000084926s
[INFO] 10.244.0.8:45899 - 41875 "A IN registry.kube-system.svc.cluster.local.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000141862s
[INFO] 10.244.0.8:50848 - 9788 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000139627s
[INFO] 10.244.0.8:50848 - 3120 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000079148s
[INFO] 10.244.0.8:40888 - 26539 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000140647s
[INFO] 10.244.0.8:40888 - 36783 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000076826s
[INFO] 10.244.0.8:38608 - 5908 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000086833s
[INFO] 10.244.0.8:38608 - 5929 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113851s
[INFO] 10.244.0.25:40051 - 49895 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000960162s
[INFO] 10.244.0.25:44941 - 63836 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000406583s
[INFO] 10.244.0.25:55076 - 6565 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00020811s
[INFO] 10.244.0.25:56339 - 46804 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000250471s
[INFO] 10.244.0.25:55022 - 44413 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136699s
[INFO] 10.244.0.25:59041 - 12602 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.002791736s
[INFO] 10.244.0.25:35772 - 34232 "AAAA IN storage.googleapis.com.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.006732381s
[INFO] 10.244.0.25:56825 - 28438 "A IN storage.googleapis.com.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.006110164s
[INFO] 10.244.0.25:52121 - 58149 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.003524221s
[INFO] 10.244.0.25:44458 - 32739 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.004198017s
[INFO] 10.244.0.25:38496 - 4315 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003836038s
[INFO] 10.244.0.25:54101 - 29271 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004630833s
[INFO] 10.244.0.25:48488 - 65030 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00390203s
[INFO] 10.244.0.25:56613 - 40518 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005012507s
==> describe nodes <==
Name: addons-829722
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-829722
kubernetes.io/os=linux
minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
minikube.k8s.io/name=addons-829722
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_20T17_47_07_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-829722
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-829722"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2024 17:47:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-829722
AcquireTime: <unset>
RenewTime: Fri, 20 Sep 2024 18:00:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 20 Sep 2024 18:00:12 +0000 Fri, 20 Sep 2024 17:47:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Sep 2024 18:00:12 +0000 Fri, 20 Sep 2024 17:47:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 20 Sep 2024 18:00:12 +0000 Fri, 20 Sep 2024 17:47:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 20 Sep 2024 18:00:12 +0000 Fri, 20 Sep 2024 17:47:04 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-829722
Capacity:
cpu: 2
ephemeral-storage: 119475748Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8141780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 119475748Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8141780Ki
pods: 110
System Info:
Machine ID: 4bb97bc0fa7a4a20bd070eb8c49d1e9e
System UUID: 2d6f64a5-f861-44f9-8e00-6c93e3d249dc
Boot ID: cf435fdf-f6f8-4ac1-8ac5-fa961184a5a5
Kernel Version: 6.1.100+
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m20s
default cloud-spanner-emulator-5b584cc74-rxdwr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13m
gadget gadget-rsrqf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-sx9cl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-5bj5q 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-8m4r5 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 13m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpathplugin-j2jxb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system etcd-addons-829722 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 13m
kube-system kube-apiserver-addons-829722 250m (12%) 0 (0%) 0 (0%) 0 (0%) 13m
kube-system kube-controller-manager-addons-829722 200m (10%) 0 (0%) 0 (0%) 0 (0%) 13m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-22p56 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13m
kube-system kube-scheduler-addons-829722 100m (5%) 0 (0%) 0 (0%) 0 (0%) 13m
kube-system metrics-server-84c5f94fbc-ddfdg 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-b77hh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-cprfs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-52mpq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (5%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 13m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 13m (x8 over 13m) kubelet Node addons-829722 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m (x8 over 13m) kubelet Node addons-829722 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m (x8 over 13m) kubelet Node addons-829722 status is now: NodeHasSufficientPID
Normal Starting 13m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 13m kubelet Node addons-829722 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13m kubelet Node addons-829722 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13m kubelet Node addons-829722 status is now: NodeHasSufficientPID
Normal RegisteredNode 13m node-controller Node addons-829722 event: Registered Node addons-829722 in Controller
==> dmesg <==
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a ea 66 12 8d 86 08 06
[ +2.081920] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff ba af 89 b7 33 77 08 06
[ +1.558509] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a 12 86 92 ce a2 08 06
[ +0.091006] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 dc 7e fc c0 88 08 06
[ +2.488095] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 13 ac 46 9d b7 08 06
[ +9.653796] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a a5 f2 e5 03 cd 08 06
[ +0.393462] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff ba f1 40 06 ff 59 08 06
[ +0.259451] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 46 a6 52 08 07 08 06
[ +7.210839] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 f1 dd 53 99 08 08 06
[Sep20 17:49] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 89 b7 74 79 0f 08 06
[ +0.455009] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 93 37 88 3f f0 08 06
[Sep20 17:50] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
[ +0.000011] ll header: 00000000: ff ff ff ff ff ff 86 e9 1b 47 86 d1 08 06
[ +0.001144] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 7a c9 9a bb 9b 08 06
==> etcd [bdcaac566afc] <==
{"level":"info","ts":"2024-09-20T17:48:50.338524Z","caller":"traceutil/trace.go:171","msg":"trace[786138600] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"300.69375ms","start":"2024-09-20T17:48:50.037816Z","end":"2024-09-20T17:48:50.338509Z","steps":["trace[786138600] 'agreement among raft nodes before linearized reading' (duration: 299.585402ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:48:50.338722Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:48:50.037763Z","time spent":"300.931364ms","remote":"127.0.0.1:51834","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2024-09-20T17:48:50.339257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.266839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2024-09-20T17:48:50.339938Z","caller":"traceutil/trace.go:171","msg":"trace[1287066274] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1269; }","duration":"314.94104ms","start":"2024-09-20T17:48:50.024979Z","end":"2024-09-20T17:48:50.339920Z","steps":["trace[1287066274] 'agreement among raft nodes before linearized reading' (duration: 314.173353ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:48:50.340157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:48:50.024924Z","time spent":"315.218655ms","remote":"127.0.0.1:51814","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"info","ts":"2024-09-20T17:48:50.596140Z","caller":"traceutil/trace.go:171","msg":"trace[574742754] linearizableReadLoop","detail":"{readStateIndex:1305; appliedIndex:1304; }","duration":"141.463953ms","start":"2024-09-20T17:48:50.454656Z","end":"2024-09-20T17:48:50.596120Z","steps":["trace[574742754] 'read index received' (duration: 141.230273ms)","trace[574742754] 'applied index is now lower than readState.Index' (duration: 233.027µs)"],"step_count":2}
{"level":"info","ts":"2024-09-20T17:48:50.596847Z","caller":"traceutil/trace.go:171","msg":"trace[837201085] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"244.616825ms","start":"2024-09-20T17:48:50.352213Z","end":"2024-09-20T17:48:50.596830Z","steps":["trace[837201085] 'process raft request' (duration: 243.777231ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:48:50.597395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.714669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T17:48:50.597604Z","caller":"traceutil/trace.go:171","msg":"trace[1850136980] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1270; }","duration":"142.947475ms","start":"2024-09-20T17:48:50.454645Z","end":"2024-09-20T17:48:50.597592Z","steps":["trace[1850136980] 'agreement among raft nodes before linearized reading' (duration: 142.689546ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:48:53.942847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.45612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T17:48:53.942914Z","caller":"traceutil/trace.go:171","msg":"trace[92721089] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1292; }","duration":"124.539282ms","start":"2024-09-20T17:48:53.818360Z","end":"2024-09-20T17:48:53.942899Z","steps":["trace[92721089] 'range keys from in-memory index tree' (duration: 124.384708ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:48:55.708939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.63545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
{"level":"info","ts":"2024-09-20T17:48:55.709013Z","caller":"traceutil/trace.go:171","msg":"trace[1683257634] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1298; }","duration":"102.726751ms","start":"2024-09-20T17:48:55.606271Z","end":"2024-09-20T17:48:55.708998Z","steps":["trace[1683257634] 'range keys from in-memory index tree' (duration: 102.429099ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:50:14.079986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.158348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T17:50:14.080332Z","caller":"traceutil/trace.go:171","msg":"trace[831965800] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1487; }","duration":"125.534344ms","start":"2024-09-20T17:50:13.954775Z","end":"2024-09-20T17:50:14.080310Z","steps":["trace[831965800] 'range keys from in-memory index tree' (duration: 125.056638ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T17:50:42.180632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.330593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2024-09-20T17:50:42.181967Z","caller":"traceutil/trace.go:171","msg":"trace[575637107] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1584; }","duration":"199.647353ms","start":"2024-09-20T17:50:41.982266Z","end":"2024-09-20T17:50:42.181913Z","steps":["trace[575637107] 'count revisions from in-memory index tree' (duration: 198.242995ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T17:50:42.433006Z","caller":"traceutil/trace.go:171","msg":"trace[478510910] linearizableReadLoop","detail":"{readStateIndex:1649; appliedIndex:1648; }","duration":"120.220432ms","start":"2024-09-20T17:50:42.312765Z","end":"2024-09-20T17:50:42.432986Z","steps":["trace[478510910] 'read index received' (duration: 119.922634ms)","trace[478510910] 'applied index is now lower than readState.Index' (duration: 296.79µs)"],"step_count":2}
{"level":"warn","ts":"2024-09-20T17:50:42.433468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.752325ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T17:50:42.433739Z","caller":"traceutil/trace.go:171","msg":"trace[208386666] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1585; }","duration":"121.040249ms","start":"2024-09-20T17:50:42.312684Z","end":"2024-09-20T17:50:42.433724Z","steps":["trace[208386666] 'agreement among raft nodes before linearized reading' (duration: 120.67078ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T17:50:42.435025Z","caller":"traceutil/trace.go:171","msg":"trace[138562135] transaction","detail":"{read_only:false; response_revision:1585; number_of_response:1; }","duration":"146.288877ms","start":"2024-09-20T17:50:42.288719Z","end":"2024-09-20T17:50:42.435008Z","steps":["trace[138562135] 'process raft request' (duration: 144.121783ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T17:57:01.608571Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1861}
{"level":"info","ts":"2024-09-20T17:57:01.775962Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1861,"took":"166.365279ms","hash":263239341,"current-db-size-bytes":9134080,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4980736,"current-db-size-in-use":"5.0 MB"}
{"level":"info","ts":"2024-09-20T17:57:01.776023Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":263239341,"revision":1861,"compact-revision":-1}
{"level":"info","ts":"2024-09-20T17:59:12.925735Z","caller":"traceutil/trace.go:171","msg":"trace[1220870792] transaction","detail":"{read_only:false; response_revision:2499; number_of_response:1; }","duration":"171.131456ms","start":"2024-09-20T17:59:12.754544Z","end":"2024-09-20T17:59:12.925675Z","steps":["trace[1220870792] 'process raft request' (duration: 170.918277ms)"],"step_count":1}
==> gcp-auth [88e23afca6cc] <==
2024/09/20 17:50:14 GCP Auth Webhook started!
2024/09/20 17:50:32 Ready to marshal response ...
2024/09/20 17:50:32 Ready to write response ...
2024/09/20 17:50:33 Ready to marshal response ...
2024/09/20 17:50:33 Ready to write response ...
2024/09/20 17:51:00 Ready to marshal response ...
2024/09/20 17:51:00 Ready to write response ...
2024/09/20 17:51:00 Ready to marshal response ...
2024/09/20 17:51:00 Ready to write response ...
2024/09/20 17:51:00 Ready to marshal response ...
2024/09/20 17:51:00 Ready to write response ...
2024/09/20 17:59:06 Ready to marshal response ...
2024/09/20 17:59:06 Ready to write response ...
2024/09/20 17:59:06 Ready to marshal response ...
2024/09/20 17:59:06 Ready to write response ...
2024/09/20 17:59:06 Ready to marshal response ...
2024/09/20 17:59:06 Ready to write response ...
2024/09/20 17:59:16 Ready to marshal response ...
2024/09/20 17:59:16 Ready to write response ...
2024/09/20 17:59:45 Ready to marshal response ...
2024/09/20 17:59:45 Ready to write response ...
2024/09/20 17:59:45 Ready to marshal response ...
2024/09/20 17:59:45 Ready to write response ...
2024/09/20 17:59:55 Ready to marshal response ...
2024/09/20 17:59:55 Ready to write response ...
==> kernel <==
18:00:20 up 1:36, 0 users, load average: 1.58, 1.75, 1.96
Linux addons-829722 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [d61034f8cde4] <==
W0920 17:49:48.118994 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.252.96:443: connect: connection refused
E0920 17:49:48.119048 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.252.96:443: connect: connection refused" logger="UnhandledError"
I0920 17:50:32.930916 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0920 17:50:32.966931 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0920 17:50:50.562817 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0920 17:50:50.670052 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0920 17:50:51.249582 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 17:50:51.373539 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 17:50:51.460771 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0920 17:50:51.743326 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
W0920 17:50:51.882009 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
I0920 17:50:52.198829 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0920 17:50:52.266415 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0920 17:50:52.342646 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0920 17:50:52.742476 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0920 17:50:52.846098 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0920 17:50:52.901180 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0920 17:50:53.155679 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0920 17:50:53.343807 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0920 17:50:53.726291 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0920 17:59:06.836141 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.10.63"}
E0920 17:59:56.499575 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
E0920 17:59:56.520482 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
E0920 17:59:56.535991 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
E0920 18:00:11.532242 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
==> kube-controller-manager [9360adb1cb13] <==
W0920 17:59:27.743605 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 17:59:27.743661 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 17:59:30.929572 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 17:59:30.929632 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 17:59:31.143020 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
I0920 17:59:32.913264 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="8.812µs"
I0920 17:59:42.575168 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-829722"
I0920 17:59:43.078937 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
W0920 17:59:44.691870 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 17:59:44.691937 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 17:59:50.192207 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 17:59:50.192296 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 17:59:56.443110 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="13.011µs"
W0920 17:59:57.296551 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 17:59:57.296616 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 18:00:04.909438 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:00:04.909499 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 18:00:09.715871 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:00:09.715933 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 18:00:10.163894 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:00:10.163954 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 18:00:12.830614 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-829722"
W0920 18:00:16.779465 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:00:16.779525 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 18:00:18.185381 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="12.78µs"
==> kube-proxy [adfbb9157753] <==
I0920 17:47:20.212768 1 server_linux.go:66] "Using iptables proxy"
I0920 17:47:22.501201 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0920 17:47:22.506855 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0920 17:47:23.420904 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0920 17:47:23.429432 1 server_linux.go:169] "Using iptables Proxier"
I0920 17:47:23.705943 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0920 17:47:23.725467 1 server.go:483] "Version info" version="v1.31.1"
I0920 17:47:23.725793 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0920 17:47:23.744612 1 config.go:199] "Starting service config controller"
I0920 17:47:23.744791 1 shared_informer.go:313] Waiting for caches to sync for service config
I0920 17:47:23.744908 1 config.go:105] "Starting endpoint slice config controller"
I0920 17:47:23.744967 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0920 17:47:23.745940 1 config.go:328] "Starting node config controller"
I0920 17:47:23.746033 1 shared_informer.go:313] Waiting for caches to sync for node config
I0920 17:47:24.045976 1 shared_informer.go:320] Caches are synced for service config
I0920 17:47:24.458574 1 shared_informer.go:320] Caches are synced for node config
I0920 17:47:24.496849 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [d22fbef53b63] <==
W0920 17:47:04.972955 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 17:47:04.973126 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 17:47:04.983035 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0920 17:47:04.983205 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.039995 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0920 17:47:05.040361 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.052371 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0920 17:47:05.052564 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.098610 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0920 17:47:05.098673 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.161275 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0920 17:47:05.161642 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0920 17:47:05.279099 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0920 17:47:05.279432 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.279751 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0920 17:47:05.279939 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.328946 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0920 17:47:05.329287 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.464906 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0920 17:47:05.465268 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.471632 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0920 17:47:05.471941 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 17:47:05.553507 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0920 17:47:05.553840 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0920 17:47:07.418743 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 20 18:00:07 addons-829722 kubelet[2199]: I0920 18:00:07.989056 2199 scope.go:117] "RemoveContainer" containerID="c473b0c90bf91d6a5138380e304dd4be10682552a1b4d02221347fda4fabddd1"
Sep 20 18:00:08 addons-829722 kubelet[2199]: I0920 18:00:08.016563 2199 scope.go:117] "RemoveContainer" containerID="0804fe88939e2a4120fc131255fbd88a9a8a2adb697fe14d3ce3c72f46ea9fb6"
Sep 20 18:00:13 addons-829722 kubelet[2199]: I0920 18:00:13.164318 2199 scope.go:117] "RemoveContainer" containerID="23647d3ccbf3fc3ad76aeb829789f93069e585370c6f4afe1b1c82edeb2fd61f"
Sep 20 18:00:13 addons-829722 kubelet[2199]: E0920 18:00:13.164618 2199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rsrqf_gadget(25179e3a-58d1-4fd9-8f93-4c956fc86855)\"" pod="gadget/gadget-rsrqf" podUID="25179e3a-58d1-4fd9-8f93-4c956fc86855"
Sep 20 18:00:17 addons-829722 kubelet[2199]: I0920 18:00:17.286980 2199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1155daed-97ac-4c99-85fd-8c60032c80be-gcp-creds\") pod \"1155daed-97ac-4c99-85fd-8c60032c80be\" (UID: \"1155daed-97ac-4c99-85fd-8c60032c80be\") "
Sep 20 18:00:17 addons-829722 kubelet[2199]: I0920 18:00:17.287085 2199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghd2j\" (UniqueName: \"kubernetes.io/projected/1155daed-97ac-4c99-85fd-8c60032c80be-kube-api-access-ghd2j\") pod \"1155daed-97ac-4c99-85fd-8c60032c80be\" (UID: \"1155daed-97ac-4c99-85fd-8c60032c80be\") "
Sep 20 18:00:17 addons-829722 kubelet[2199]: I0920 18:00:17.287801 2199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1155daed-97ac-4c99-85fd-8c60032c80be-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1155daed-97ac-4c99-85fd-8c60032c80be" (UID: "1155daed-97ac-4c99-85fd-8c60032c80be"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 18:00:17 addons-829722 kubelet[2199]: I0920 18:00:17.297798 2199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1155daed-97ac-4c99-85fd-8c60032c80be-kube-api-access-ghd2j" (OuterVolumeSpecName: "kube-api-access-ghd2j") pod "1155daed-97ac-4c99-85fd-8c60032c80be" (UID: "1155daed-97ac-4c99-85fd-8c60032c80be"). InnerVolumeSpecName "kube-api-access-ghd2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 18:00:17 addons-829722 kubelet[2199]: I0920 18:00:17.387487 2199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ghd2j\" (UniqueName: \"kubernetes.io/projected/1155daed-97ac-4c99-85fd-8c60032c80be-kube-api-access-ghd2j\") on node \"addons-829722\" DevicePath \"\""
Sep 20 18:00:17 addons-829722 kubelet[2199]: I0920 18:00:17.387566 2199 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1155daed-97ac-4c99-85fd-8c60032c80be-gcp-creds\") on node \"addons-829722\" DevicePath \"\""
Sep 20 18:00:18 addons-829722 kubelet[2199]: I0920 18:00:18.903851 2199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5wtx\" (UniqueName: \"kubernetes.io/projected/12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add-kube-api-access-m5wtx\") pod \"12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add\" (UID: \"12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add\") "
Sep 20 18:00:18 addons-829722 kubelet[2199]: I0920 18:00:18.909883 2199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add-kube-api-access-m5wtx" (OuterVolumeSpecName: "kube-api-access-m5wtx") pod "12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add" (UID: "12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add"). InnerVolumeSpecName "kube-api-access-m5wtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.005035 2199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-m5wtx\" (UniqueName: \"kubernetes.io/projected/12abb1a3-4b6b-4fe1-a6a7-6fe2cf0f3add-kube-api-access-m5wtx\") on node \"addons-829722\" DevicePath \"\""
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.092338 2199 scope.go:117] "RemoveContainer" containerID="670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b"
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.105486 2199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6h57\" (UniqueName: \"kubernetes.io/projected/24cb7649-58a6-4012-827b-a27d68665a07-kube-api-access-g6h57\") pod \"24cb7649-58a6-4012-827b-a27d68665a07\" (UID: \"24cb7649-58a6-4012-827b-a27d68665a07\") "
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.115692 2199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cb7649-58a6-4012-827b-a27d68665a07-kube-api-access-g6h57" (OuterVolumeSpecName: "kube-api-access-g6h57") pod "24cb7649-58a6-4012-827b-a27d68665a07" (UID: "24cb7649-58a6-4012-827b-a27d68665a07"). InnerVolumeSpecName "kube-api-access-g6h57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.141053 2199 scope.go:117] "RemoveContainer" containerID="670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b"
Sep 20 18:00:19 addons-829722 kubelet[2199]: E0920 18:00:19.142953 2199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b" containerID="670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b"
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.143033 2199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b"} err="failed to get container status \"670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 670f51ba5cf60e7ccf6d97bb5b6f2f57e179c3d62c7312c0ebe011e51689cd6b"
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.143072 2199 scope.go:117] "RemoveContainer" containerID="a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950"
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.173790 2199 scope.go:117] "RemoveContainer" containerID="a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950"
Sep 20 18:00:19 addons-829722 kubelet[2199]: E0920 18:00:19.175232 2199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950" containerID="a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950"
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.175296 2199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950"} err="failed to get container status \"a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950\": rpc error: code = Unknown desc = Error response from daemon: No such container: a506ff67bf0db2f2f67d9b525dfb201793ce60ad8cb4dcbaf375271675868950"
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.214026 2199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g6h57\" (UniqueName: \"kubernetes.io/projected/24cb7649-58a6-4012-827b-a27d68665a07-kube-api-access-g6h57\") on node \"addons-829722\" DevicePath \"\""
Sep 20 18:00:19 addons-829722 kubelet[2199]: I0920 18:00:19.250770 2199 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1155daed-97ac-4c99-85fd-8c60032c80be" path="/var/lib/kubelet/pods/1155daed-97ac-4c99-85fd-8c60032c80be/volumes"
==> storage-provisioner [aeef481ba152] <==
I0920 17:47:29.609622 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0920 17:47:29.990235 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0920 17:47:29.990438 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0920 17:47:30.338951 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0920 17:47:30.347436 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee73c386-1a2c-4207-85a9-f70f7ea44a68", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-829722_a4e2e519-4010-4b82-ae99-43521b113507 became leader
I0920 17:47:30.368320 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-829722_a4e2e519-4010-4b82-ae99-43521b113507!
I0920 17:47:30.674349 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-829722_a4e2e519-4010-4b82-ae99-43521b113507!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-829722 -n addons-829722
helpers_test.go:261: (dbg) Run: kubectl --context addons-829722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-nm9vm ingress-nginx-admission-patch-wg52m
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-829722 describe pod busybox ingress-nginx-admission-create-nm9vm ingress-nginx-admission-patch-wg52m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-829722 describe pod busybox ingress-nginx-admission-create-nm9vm ingress-nginx-admission-patch-wg52m: exit status 1 (114.585457ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-829722/192.168.49.2
Start Time:       Fri, 20 Sep 2024 17:51:00 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6xn2x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-6xn2x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m21s                   default-scheduler  Successfully assigned default/busybox to addons-829722
  Warning  Failed     7m59s (x6 over 9m19s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    7m47s (x4 over 9m20s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m47s (x4 over 9m20s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m47s (x4 over 9m20s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m19s (x21 over 9m19s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-nm9vm" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-wg52m" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-829722 describe pod busybox ingress-nginx-admission-create-nm9vm ingress-nginx-admission-patch-wg52m: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.28s)