=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.786405ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7gzvx" [1a2130f7-6cbe-4a8b-bea3-e3e4436003d2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004011068s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-htg6g" [53474271-c9f2-4050-bf68-df5e1935aa85] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006764871s
addons_test.go:342: (dbg) Run: kubectl --context addons-837740 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-837740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-837740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.126333937s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-837740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run: out/minikube-linux-arm64 -p addons-837740 ip
2024/09/15 06:43:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-arm64 -p addons-837740 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-837740
helpers_test.go:235: (dbg) docker inspect addons-837740:
-- stdout --
[
{
"Id": "b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90",
"Created": "2024-09-15T06:30:20.754532867Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8917,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-15T06:30:20.930143613Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
"ResolvConfPath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/hostname",
"HostsPath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/hosts",
"LogPath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90-json.log",
"Name": "/addons-837740",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-837740:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-837740",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005-init/diff:/var/lib/docker/overlay2/a44563b42d4442f369c0c7152703f9a3fe2e4fcbab25a6b8f520f3ba6cd0cdaf/diff",
"MergedDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005/merged",
"UpperDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005/diff",
"WorkDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-837740",
"Source": "/var/lib/docker/volumes/addons-837740/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-837740",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-837740",
"name.minikube.sigs.k8s.io": "addons-837740",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4fdff80516785e368972eb44dce6aab88731e1e9932522c37146ca661e167557",
"SandboxKey": "/var/run/docker/netns/4fdff8051678",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-837740": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "3c2646c3e2640b3b16e2bdca21d65eb739c40c2a638d9e84c3615750ebd4fc28",
"EndpointID": "d1995260dbcefbe0765216f1fab559d993dd49f3d904b91c0727c39758debdaa",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-837740",
"b3ba6fbaf9bc"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-837740 -n addons-837740
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-837740 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 logs -n 25: (1.531392739s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-221568 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | |
| | -p download-only-221568 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| delete | -p download-only-221568 | download-only-221568 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| start | -o=json --download-only | download-only-157916 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | |
| | -p download-only-157916 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| delete | -p download-only-157916 | download-only-157916 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| delete | -p download-only-221568 | download-only-221568 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| delete | -p download-only-157916 | download-only-157916 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| start | --download-only -p | download-docker-771311 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | |
| | download-docker-771311 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-771311 | download-docker-771311 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| start | --download-only -p | binary-mirror-730073 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | |
| | binary-mirror-730073 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:35331 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-730073 | binary-mirror-730073 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
| addons | disable dashboard -p | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | |
| | addons-837740 | | | | | |
| addons | enable dashboard -p | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | |
| | addons-837740 | | | | | |
| start | -p addons-837740 --wait=true | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:33 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-837740 addons disable | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:34 UTC | 15 Sep 24 06:34 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-837740 addons | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-837740 addons | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-837740 addons | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
| | addons-837740 | | | | | |
| ip | addons-837740 ip | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
| addons | addons-837740 addons disable | addons-837740 | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/15 06:29:55
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 06:29:55.490756 8422 out.go:345] Setting OutFile to fd 1 ...
I0915 06:29:55.490925 8422 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:29:55.490952 8422 out.go:358] Setting ErrFile to fd 2...
I0915 06:29:55.490971 8422 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:29:55.491251 8422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
I0915 06:29:55.491730 8422 out.go:352] Setting JSON to false
I0915 06:29:55.492494 8422 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":747,"bootTime":1726381048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0915 06:29:55.492593 8422 start.go:139] virtualization:
I0915 06:29:55.495273 8422 out.go:177] * [addons-837740] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0915 06:29:55.497782 8422 out.go:177] - MINIKUBE_LOCATION=19644
I0915 06:29:55.497906 8422 notify.go:220] Checking for updates...
I0915 06:29:55.502029 8422 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0915 06:29:55.504321 8422 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
I0915 06:29:55.506391 8422 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
I0915 06:29:55.508477 8422 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0915 06:29:55.510878 8422 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0915 06:29:55.513296 8422 driver.go:394] Setting default libvirt URI to qemu:///system
I0915 06:29:55.533774 8422 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
I0915 06:29:55.533903 8422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:29:55.595604 8422 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:29:55.586000671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:29:55.595718 8422 docker.go:318] overlay module found
I0915 06:29:55.598109 8422 out.go:177] * Using the docker driver based on user configuration
I0915 06:29:55.600180 8422 start.go:297] selected driver: docker
I0915 06:29:55.600196 8422 start.go:901] validating driver "docker" against <nil>
I0915 06:29:55.600210 8422 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0915 06:29:55.600878 8422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:29:55.651426 8422 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:29:55.642528615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:29:55.651665 8422 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0915 06:29:55.651893 8422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0915 06:29:55.653727 8422 out.go:177] * Using Docker driver with root privileges
I0915 06:29:55.655832 8422 cni.go:84] Creating CNI manager for ""
I0915 06:29:55.655909 8422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0915 06:29:55.655921 8422 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0915 06:29:55.656005 8422 start.go:340] cluster config:
{Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0915 06:29:55.658264 8422 out.go:177] * Starting "addons-837740" primary control-plane node in "addons-837740" cluster
I0915 06:29:55.659987 8422 cache.go:121] Beginning downloading kic base image for docker with docker
I0915 06:29:55.662210 8422 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
I0915 06:29:55.664265 8422 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0915 06:29:55.664281 8422 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
I0915 06:29:55.664316 8422 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0915 06:29:55.664332 8422 cache.go:56] Caching tarball of preloaded images
I0915 06:29:55.664407 8422 preload.go:172] Found /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0915 06:29:55.664417 8422 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0915 06:29:55.664782 8422 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/config.json ...
I0915 06:29:55.664802 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/config.json: {Name:mk1fe7961cb83ebea802ec66b791f26a5822ae6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:29:55.679375 8422 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
I0915 06:29:55.679481 8422 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
I0915 06:29:55.679513 8422 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
I0915 06:29:55.679518 8422 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
I0915 06:29:55.679526 8422 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
I0915 06:29:55.679531 8422 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
I0915 06:30:13.336178 8422 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
I0915 06:30:13.336221 8422 cache.go:194] Successfully downloaded all kic artifacts
I0915 06:30:13.336267 8422 start.go:360] acquireMachinesLock for addons-837740: {Name:mk477b3475122614ef47a52333416900132c8763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 06:30:13.336381 8422 start.go:364] duration metric: took 92.136µs to acquireMachinesLock for "addons-837740"
I0915 06:30:13.336412 8422 start.go:93] Provisioning new machine with config: &{Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0915 06:30:13.336495 8422 start.go:125] createHost starting for "" (driver="docker")
I0915 06:30:13.339008 8422 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0915 06:30:13.339278 8422 start.go:159] libmachine.API.Create for "addons-837740" (driver="docker")
I0915 06:30:13.339313 8422 client.go:168] LocalClient.Create starting
I0915 06:30:13.339442 8422 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem
I0915 06:30:14.329172 8422 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem
I0915 06:30:14.605385 8422 cli_runner.go:164] Run: docker network inspect addons-837740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0915 06:30:14.621548 8422 cli_runner.go:211] docker network inspect addons-837740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0915 06:30:14.621631 8422 network_create.go:284] running [docker network inspect addons-837740] to gather additional debugging logs...
I0915 06:30:14.621652 8422 cli_runner.go:164] Run: docker network inspect addons-837740
W0915 06:30:14.636610 8422 cli_runner.go:211] docker network inspect addons-837740 returned with exit code 1
I0915 06:30:14.636638 8422 network_create.go:287] error running [docker network inspect addons-837740]: docker network inspect addons-837740: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-837740 not found
I0915 06:30:14.636651 8422 network_create.go:289] output of [docker network inspect addons-837740]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-837740 not found
** /stderr **
I0915 06:30:14.636752 8422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:30:14.657796 8422 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000406a20}
I0915 06:30:14.657841 8422 network_create.go:124] attempt to create docker network addons-837740 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0915 06:30:14.657906 8422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-837740 addons-837740
I0915 06:30:14.729518 8422 network_create.go:108] docker network addons-837740 192.168.49.0/24 created
I0915 06:30:14.729549 8422 kic.go:121] calculated static IP "192.168.49.2" for the "addons-837740" container
I0915 06:30:14.729623 8422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0915 06:30:14.744460 8422 cli_runner.go:164] Run: docker volume create addons-837740 --label name.minikube.sigs.k8s.io=addons-837740 --label created_by.minikube.sigs.k8s.io=true
I0915 06:30:14.761453 8422 oci.go:103] Successfully created a docker volume addons-837740
I0915 06:30:14.761545 8422 cli_runner.go:164] Run: docker run --rm --name addons-837740-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-837740 --entrypoint /usr/bin/test -v addons-837740:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
I0915 06:30:16.975199 8422 cli_runner.go:217] Completed: docker run --rm --name addons-837740-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-837740 --entrypoint /usr/bin/test -v addons-837740:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (2.213593066s)
I0915 06:30:16.975230 8422 oci.go:107] Successfully prepared a docker volume addons-837740
I0915 06:30:16.975263 8422 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0915 06:30:16.975283 8422 kic.go:194] Starting extracting preloaded images to volume ...
I0915 06:30:16.975349 8422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-837740:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
I0915 06:30:20.683211 8422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-837740:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.707824928s)
I0915 06:30:20.683241 8422 kic.go:203] duration metric: took 3.707955143s to extract preloaded images to volume ...
W0915 06:30:20.683410 8422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0915 06:30:20.683528 8422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0915 06:30:20.739982 8422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-837740 --name addons-837740 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-837740 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-837740 --network addons-837740 --ip 192.168.49.2 --volume addons-837740:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
I0915 06:30:21.107611 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Running}}
I0915 06:30:21.128498 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:21.157201 8422 cli_runner.go:164] Run: docker exec addons-837740 stat /var/lib/dpkg/alternatives/iptables
I0915 06:30:21.226626 8422 oci.go:144] the created container "addons-837740" has a running status.
I0915 06:30:21.226662 8422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa...
I0915 06:30:22.260322 8422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0915 06:30:22.283288 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:22.299737 8422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0915 06:30:22.299759 8422 kic_runner.go:114] Args: [docker exec --privileged addons-837740 chown docker:docker /home/docker/.ssh/authorized_keys]
I0915 06:30:22.354684 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:22.371829 8422 machine.go:93] provisionDockerMachine start ...
I0915 06:30:22.371916 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:22.389178 8422 main.go:141] libmachine: Using SSH client type: native
I0915 06:30:22.389440 8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0915 06:30:22.389450 8422 main.go:141] libmachine: About to run SSH command:
hostname
I0915 06:30:22.525325 8422 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-837740
I0915 06:30:22.525350 8422 ubuntu.go:169] provisioning hostname "addons-837740"
I0915 06:30:22.525415 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:22.543043 8422 main.go:141] libmachine: Using SSH client type: native
I0915 06:30:22.543291 8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0915 06:30:22.543317 8422 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-837740 && echo "addons-837740" | sudo tee /etc/hostname
I0915 06:30:22.689867 8422 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-837740
I0915 06:30:22.690040 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:22.712795 8422 main.go:141] libmachine: Using SSH client type: native
I0915 06:30:22.713037 8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0915 06:30:22.713060 8422 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-837740' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-837740/g' /etc/hosts;
else
echo '127.0.1.1 addons-837740' | sudo tee -a /etc/hosts;
fi
fi
I0915 06:30:22.850288 8422 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0915 06:30:22.850381 8422 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2359/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2359/.minikube}
I0915 06:30:22.850437 8422 ubuntu.go:177] setting up certificates
I0915 06:30:22.850468 8422 provision.go:84] configureAuth start
I0915 06:30:22.850585 8422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-837740
I0915 06:30:22.868830 8422 provision.go:143] copyHostCerts
I0915 06:30:22.868931 8422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2359/.minikube/ca.pem (1078 bytes)
I0915 06:30:22.869116 8422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2359/.minikube/cert.pem (1123 bytes)
I0915 06:30:22.869202 8422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2359/.minikube/key.pem (1675 bytes)
I0915 06:30:22.869278 8422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca-key.pem org=jenkins.addons-837740 san=[127.0.0.1 192.168.49.2 addons-837740 localhost minikube]
I0915 06:30:23.867372 8422 provision.go:177] copyRemoteCerts
I0915 06:30:23.867442 8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0915 06:30:23.867484 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:23.883832 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:23.979057 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0915 06:30:24.003561 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0915 06:30:24.036101 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0915 06:30:24.061152 8422 provision.go:87] duration metric: took 1.21064733s to configureAuth
I0915 06:30:24.061179 8422 ubuntu.go:193] setting minikube options for container-runtime
I0915 06:30:24.061396 8422 config.go:182] Loaded profile config "addons-837740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:30:24.061458 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:24.080077 8422 main.go:141] libmachine: Using SSH client type: native
I0915 06:30:24.080333 8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0915 06:30:24.080349 8422 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0915 06:30:24.218509 8422 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0915 06:30:24.218535 8422 ubuntu.go:71] root file system type: overlay
I0915 06:30:24.218645 8422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0915 06:30:24.218713 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:24.235829 8422 main.go:141] libmachine: Using SSH client type: native
I0915 06:30:24.236074 8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0915 06:30:24.236155 8422 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0915 06:30:24.385696 8422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0915 06:30:24.385784 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:24.402347 8422 main.go:141] libmachine: Using SSH client type: native
I0915 06:30:24.402592 8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0915 06:30:24.402618 8422 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0915 06:30:25.204963 8422 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:36.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-15 06:30:24.378891711 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0915 06:30:25.205000 8422 machine.go:96] duration metric: took 2.833152719s to provisionDockerMachine
I0915 06:30:25.205028 8422 client.go:171] duration metric: took 11.865687574s to LocalClient.Create
I0915 06:30:25.205055 8422 start.go:167] duration metric: took 11.865778003s to libmachine.API.Create "addons-837740"
I0915 06:30:25.205068 8422 start.go:293] postStartSetup for "addons-837740" (driver="docker")
I0915 06:30:25.205078 8422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0915 06:30:25.205164 8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0915 06:30:25.205232 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:25.222948 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:25.318714 8422 ssh_runner.go:195] Run: cat /etc/os-release
I0915 06:30:25.322843 8422 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0915 06:30:25.322877 8422 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0915 06:30:25.322887 8422 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0915 06:30:25.322897 8422 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0915 06:30:25.322908 8422 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2359/.minikube/addons for local assets ...
I0915 06:30:25.322976 8422 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2359/.minikube/files for local assets ...
I0915 06:30:25.323000 8422 start.go:296] duration metric: took 117.924264ms for postStartSetup
I0915 06:30:25.323306 8422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-837740
I0915 06:30:25.339743 8422 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/config.json ...
I0915 06:30:25.340027 8422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0915 06:30:25.340077 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:25.356976 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:25.451112 8422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0915 06:30:25.455762 8422 start.go:128] duration metric: took 12.119251282s to createHost
I0915 06:30:25.455788 8422 start.go:83] releasing machines lock for "addons-837740", held for 12.119392066s
I0915 06:30:25.455883 8422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-837740
I0915 06:30:25.472878 8422 ssh_runner.go:195] Run: cat /version.json
I0915 06:30:25.472934 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:25.473180 8422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0915 06:30:25.473237 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:25.492536 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:25.500015 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:25.585540 8422 ssh_runner.go:195] Run: systemctl --version
I0915 06:30:25.718987 8422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0915 06:30:25.724484 8422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0915 06:30:25.750616 8422 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0915 06:30:25.750739 8422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0915 06:30:25.779289 8422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0915 06:30:25.779314 8422 start.go:495] detecting cgroup driver to use...
I0915 06:30:25.779353 8422 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0915 06:30:25.779452 8422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0915 06:30:25.795977 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0915 06:30:25.805603 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0915 06:30:25.815646 8422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0915 06:30:25.815731 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0915 06:30:25.825642 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0915 06:30:25.835733 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0915 06:30:25.845574 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0915 06:30:25.855412 8422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0915 06:30:25.864314 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0915 06:30:25.874274 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0915 06:30:25.883966 8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0915 06:30:25.893916 8422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0915 06:30:25.902984 8422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0915 06:30:25.911278 8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 06:30:25.992853 8422 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0915 06:30:26.108450 8422 start.go:495] detecting cgroup driver to use...
I0915 06:30:26.108538 8422 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0915 06:30:26.108625 8422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0915 06:30:26.126937 8422 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0915 06:30:26.127050 8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0915 06:30:26.142151 8422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0915 06:30:26.161819 8422 ssh_runner.go:195] Run: which cri-dockerd
I0915 06:30:26.166732 8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0915 06:30:26.176588 8422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0915 06:30:26.197066 8422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0915 06:30:26.298468 8422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0915 06:30:26.393901 8422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0915 06:30:26.394120 8422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0915 06:30:26.413032 8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 06:30:26.511583 8422 ssh_runner.go:195] Run: sudo systemctl restart docker
I0915 06:30:26.774737 8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0915 06:30:26.787088 8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0915 06:30:26.798955 8422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0915 06:30:26.884577 8422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0915 06:30:26.983569 8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 06:30:27.072596 8422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0915 06:30:27.088033 8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0915 06:30:27.099721 8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 06:30:27.189239 8422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0915 06:30:27.256707 8422 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0915 06:30:27.256860 8422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0915 06:30:27.260580 8422 start.go:563] Will wait 60s for crictl version
I0915 06:30:27.260686 8422 ssh_runner.go:195] Run: which crictl
I0915 06:30:27.264645 8422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0915 06:30:27.303474 8422 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0915 06:30:27.303594 8422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0915 06:30:27.326570 8422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0915 06:30:27.353452 8422 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0915 06:30:27.353563 8422 cli_runner.go:164] Run: docker network inspect addons-837740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:30:27.369313 8422 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0915 06:30:27.372894 8422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
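The /etc/hosts update above is idempotent: it strips any stale `host.minikube.internal` line, appends the fresh mapping, and copies the result back via a temp file. A sketch replayed on a scratch copy rather than the real /etc/hosts:

```shell
# Replay of the grep-then-append pattern from the log: remove any existing
# tab-separated host.minikube.internal entry, append the new one, and move
# the rewritten file into place, so repeated runs never duplicate the line.
hosts="$(mktemp)"
printf '127.0.0.1 localhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; echo "192.168.49.1 host.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"
```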
I0915 06:30:27.383894 8422 kubeadm.go:883] updating cluster {Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0915 06:30:27.384015 8422 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0915 06:30:27.384078 8422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 06:30:27.402434 8422 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0915 06:30:27.402457 8422 docker.go:615] Images already preloaded, skipping extraction
I0915 06:30:27.402526 8422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0915 06:30:27.418663 8422 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0915 06:30:27.418687 8422 cache_images.go:84] Images are preloaded, skipping loading
I0915 06:30:27.418697 8422 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0915 06:30:27.418793 8422 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-837740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0915 06:30:27.418860 8422 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0915 06:30:27.464154 8422 cni.go:84] Creating CNI manager for ""
I0915 06:30:27.464182 8422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0915 06:30:27.464192 8422 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0915 06:30:27.464218 8422 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-837740 NodeName:addons-837740 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0915 06:30:27.464359 8422 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-837740"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
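A cheap structural sanity check on a multi-document config like the one generated above is to confirm each YAML document declares a `kind:` (recent kubeadm releases also provide `kubeadm config validate --config <file>` for a real semantic check on the node). A sketch over a trimmed-down copy of the four documents:

```shell
# Count YAML documents and `kind:` declarations in a trimmed copy of the
# kubeadm config above; each `---`-separated document should carry one kind.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
docs=$(( $(grep -c '^---$' "$cfg") + 1 ))
kinds=$(grep -c '^kind:' "$cfg")
echo "$docs $kinds"
```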
I0915 06:30:27.464461 8422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0915 06:30:27.473282 8422 binaries.go:44] Found k8s binaries, skipping transfer
I0915 06:30:27.473353 8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0915 06:30:27.482067 8422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0915 06:30:27.499918 8422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0915 06:30:27.517511 8422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0915 06:30:27.535263 8422 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0915 06:30:27.538536 8422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0915 06:30:27.549202 8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 06:30:27.634233 8422 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0915 06:30:27.649209 8422 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740 for IP: 192.168.49.2
I0915 06:30:27.649228 8422 certs.go:194] generating shared ca certs ...
I0915 06:30:27.649244 8422 certs.go:226] acquiring lock for ca certs: {Name:mk13c71d6895f2d850a77bc195b18d377b1ebab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:27.649371 8422 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key
I0915 06:30:27.908855 8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt ...
I0915 06:30:27.908884 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt: {Name:mk3b0689801412b44fa166e8fdbf24d56dce9b53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:27.909112 8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key ...
I0915 06:30:27.909128 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key: {Name:mk83d56d5dc3987cdf10455f164b84411abafa05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:27.909242 8422 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key
I0915 06:30:28.687516 8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.crt ...
I0915 06:30:28.687549 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.crt: {Name:mk55025023dfb8fd9a7f55d023f6c0ea9adcc0b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:28.687735 8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key ...
I0915 06:30:28.687748 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key: {Name:mk123f0d53fa1bac4f2d6191863a97da19cc0845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:28.687826 8422 certs.go:256] generating profile certs ...
I0915 06:30:28.687883 8422 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.key
I0915 06:30:28.687894 8422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt with IP's: []
I0915 06:30:28.839851 8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt ...
I0915 06:30:28.839886 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: {Name:mke9ce8ea39d7af3cb4d7a78a390c92cbe920c41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:28.840083 8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.key ...
I0915 06:30:28.840096 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.key: {Name:mk6dc285a1be0c8296b45a1eeeed6c7936967204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:28.840173 8422 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868
I0915 06:30:28.840198 8422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0915 06:30:30.217736 8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868 ...
I0915 06:30:30.217776 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868: {Name:mk41712f3624b73d5ebed9a84d068bbcb9634185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:30.218012 8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868 ...
I0915 06:30:30.218031 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868: {Name:mk58d2a8cf2b714c2c289d85d02b81730638e260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:30.218129 8422 certs.go:381] copying /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868 -> /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt
I0915 06:30:30.218215 8422 certs.go:385] copying /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868 -> /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key
I0915 06:30:30.218277 8422 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key
I0915 06:30:30.218299 8422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt with IP's: []
I0915 06:30:30.639685 8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt ...
I0915 06:30:30.639716 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt: {Name:mkf6acb1dccda4a096cbf1dfcd5f2db6356b76e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:30.639901 8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key ...
I0915 06:30:30.639915 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key: {Name:mk6cdca20081c5d4d5edca310f6cda8439b596f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:30.640103 8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca-key.pem (1679 bytes)
I0915 06:30:30.640146 8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem (1078 bytes)
I0915 06:30:30.640179 8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem (1123 bytes)
I0915 06:30:30.640207 8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/key.pem (1675 bytes)
I0915 06:30:30.640777 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0915 06:30:30.665699 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0915 06:30:30.689349 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0915 06:30:30.712479 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0915 06:30:30.737351 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0915 06:30:30.761568 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0915 06:30:30.787576 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0915 06:30:30.814173 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0915 06:30:30.838428 8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0915 06:30:30.863319 8422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0915 06:30:30.881545 8422 ssh_runner.go:195] Run: openssl version
I0915 06:30:30.886907 8422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0915 06:30:30.896324 8422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0915 06:30:30.899578 8422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:30 /usr/share/ca-certificates/minikubeCA.pem
I0915 06:30:30.899640 8422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0915 06:30:30.906307 8422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
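The `openssl x509 -hash` and `ln -fs .../b5213941.0` pair above exists because OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-hash filename (`<hash>.0`). A sketch of the same mechanism with a throwaway self-signed CA in a temp directory:

```shell
# Compute the subject hash of a CA certificate and create the <hash>.0
# symlink OpenSSL expects in a CApath directory, mirroring the two
# minikube steps above. Uses a scratch dir, not /etc/ssl/certs.
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=sketchCA" -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls "$dir/$hash.0"
```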
I0915 06:30:30.915634 8422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0915 06:30:30.918842 8422 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0915 06:30:30.918913 8422 kubeadm.go:392] StartCluster: {Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0915 06:30:30.919050 8422 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0915 06:30:30.935500 8422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0915 06:30:30.944221 8422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0915 06:30:30.952893 8422 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0915 06:30:30.952984 8422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0915 06:30:30.961657 8422 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0915 06:30:30.961716 8422 kubeadm.go:157] found existing configuration files:
I0915 06:30:30.961779 8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0915 06:30:30.970499 8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0915 06:30:30.970583 8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0915 06:30:30.978660 8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0915 06:30:30.987297 8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0915 06:30:30.987380 8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0915 06:30:30.995730 8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0915 06:30:31.004227 8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0915 06:30:31.004329 8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0915 06:30:31.017031 8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0915 06:30:31.025777 8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0915 06:30:31.025851 8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0915 06:30:31.034902 8422 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0915 06:30:31.079133 8422 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0915 06:30:31.079259 8422 kubeadm.go:310] [preflight] Running pre-flight checks
I0915 06:30:31.102260 8422 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0915 06:30:31.102344 8422 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
I0915 06:30:31.102385 8422 kubeadm.go:310] OS: Linux
I0915 06:30:31.102434 8422 kubeadm.go:310] CGROUPS_CPU: enabled
I0915 06:30:31.102486 8422 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0915 06:30:31.102538 8422 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0915 06:30:31.102589 8422 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0915 06:30:31.102641 8422 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0915 06:30:31.102692 8422 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0915 06:30:31.102740 8422 kubeadm.go:310] CGROUPS_PIDS: enabled
I0915 06:30:31.102792 8422 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0915 06:30:31.102849 8422 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0915 06:30:31.171670 8422 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0915 06:30:31.171782 8422 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0915 06:30:31.171876 8422 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0915 06:30:31.183792 8422 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0915 06:30:31.188898 8422 out.go:235] - Generating certificates and keys ...
I0915 06:30:31.189049 8422 kubeadm.go:310] [certs] Using existing ca certificate authority
I0915 06:30:31.189144 8422 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0915 06:30:31.664598 8422 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0915 06:30:32.017260 8422 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0915 06:30:32.482571 8422 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0915 06:30:33.022487 8422 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0915 06:30:33.515113 8422 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0915 06:30:33.515555 8422 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-837740 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0915 06:30:34.106957 8422 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0915 06:30:34.107267 8422 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-837740 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0915 06:30:34.392803 8422 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0915 06:30:34.975737 8422 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0915 06:30:36.016010 8422 kubeadm.go:310] [certs] Generating "sa" key and public key
I0915 06:30:36.016095 8422 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0915 06:30:36.506979 8422 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0915 06:30:36.857933 8422 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0915 06:30:37.487161 8422 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0915 06:30:37.844560 8422 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0915 06:30:38.021068 8422 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0915 06:30:38.022106 8422 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0915 06:30:38.025430 8422 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0915 06:30:38.027752 8422 out.go:235] - Booting up control plane ...
I0915 06:30:38.027870 8422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0915 06:30:38.027957 8422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0915 06:30:38.028967 8422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0915 06:30:38.040889 8422 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0915 06:30:38.047588 8422 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0915 06:30:38.047844 8422 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0915 06:30:38.167582 8422 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0915 06:30:38.167707 8422 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0915 06:30:39.669071 8422 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50144258s
I0915 06:30:39.669167 8422 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0915 06:30:47.170334 8422 kubeadm.go:310] [api-check] The API server is healthy after 7.501385072s
I0915 06:30:47.191798 8422 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0915 06:30:47.205945 8422 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0915 06:30:47.230922 8422 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0915 06:30:47.231130 8422 kubeadm.go:310] [mark-control-plane] Marking the node addons-837740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0915 06:30:47.240826 8422 kubeadm.go:310] [bootstrap-token] Using token: brjfs8.a4kwxi7fgc9yosoz
I0915 06:30:47.242826 8422 out.go:235] - Configuring RBAC rules ...
I0915 06:30:47.243035 8422 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0915 06:30:47.248309 8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0915 06:30:47.255766 8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0915 06:30:47.259379 8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0915 06:30:47.264813 8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0915 06:30:47.270028 8422 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0915 06:30:47.577370 8422 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0915 06:30:48.003173 8422 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0915 06:30:48.577134 8422 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0915 06:30:48.578350 8422 kubeadm.go:310]
I0915 06:30:48.578420 8422 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0915 06:30:48.578426 8422 kubeadm.go:310]
I0915 06:30:48.578502 8422 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0915 06:30:48.578506 8422 kubeadm.go:310]
I0915 06:30:48.578531 8422 kubeadm.go:310] mkdir -p $HOME/.kube
I0915 06:30:48.578590 8422 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0915 06:30:48.578639 8422 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0915 06:30:48.578644 8422 kubeadm.go:310]
I0915 06:30:48.578697 8422 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0915 06:30:48.578701 8422 kubeadm.go:310]
I0915 06:30:48.578748 8422 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0915 06:30:48.578753 8422 kubeadm.go:310]
I0915 06:30:48.578804 8422 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0915 06:30:48.578886 8422 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0915 06:30:48.578953 8422 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0915 06:30:48.578958 8422 kubeadm.go:310]
I0915 06:30:48.579040 8422 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0915 06:30:48.579115 8422 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0915 06:30:48.579119 8422 kubeadm.go:310]
I0915 06:30:48.579202 8422 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token brjfs8.a4kwxi7fgc9yosoz \
I0915 06:30:48.579303 8422 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:f554695279f590146a1d6e30dd969f83b0e60351f554476a16c563429bd9a62b \
I0915 06:30:48.579325 8422 kubeadm.go:310] --control-plane
I0915 06:30:48.579330 8422 kubeadm.go:310]
I0915 06:30:48.579419 8422 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0915 06:30:48.579424 8422 kubeadm.go:310]
I0915 06:30:48.579504 8422 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token brjfs8.a4kwxi7fgc9yosoz \
I0915 06:30:48.579604 8422 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:f554695279f590146a1d6e30dd969f83b0e60351f554476a16c563429bd9a62b
I0915 06:30:48.582143 8422 kubeadm.go:310] W0915 06:30:31.074967 1820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0915 06:30:48.582457 8422 kubeadm.go:310] W0915 06:30:31.076167 1820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0915 06:30:48.582670 8422 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
I0915 06:30:48.582791 8422 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0915 06:30:48.582812 8422 cni.go:84] Creating CNI manager for ""
I0915 06:30:48.582827 8422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0915 06:30:48.586060 8422 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0915 06:30:48.587791 8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0915 06:30:48.596658 8422 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0915 06:30:48.615587 8422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0915 06:30:48.615704 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:48.615770 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-837740 minikube.k8s.io/updated_at=2024_09_15T06_30_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-837740 minikube.k8s.io/primary=true
I0915 06:30:48.753454 8422 ops.go:34] apiserver oom_adj: -16
I0915 06:30:48.753563 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:49.253920 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:49.754515 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:50.253661 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:50.754442 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:51.254642 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:51.753693 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:52.254258 8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0915 06:30:52.341833 8422 kubeadm.go:1113] duration metric: took 3.726172194s to wait for elevateKubeSystemPrivileges
I0915 06:30:52.341864 8422 kubeadm.go:394] duration metric: took 21.422979376s to StartCluster
I0915 06:30:52.341882 8422 settings.go:142] acquiring lock: {Name:mk8198f125c4123ce66d3a387e925294953ccbbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:52.342030 8422 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19644-2359/kubeconfig
I0915 06:30:52.342393 8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/kubeconfig: {Name:mk02932df8d8a4c1b90f61568583a2b22575293e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 06:30:52.342603 8422 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0915 06:30:52.342705 8422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0915 06:30:52.342935 8422 config.go:182] Loaded profile config "addons-837740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:30:52.342967 8422 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0915 06:30:52.343046 8422 addons.go:69] Setting yakd=true in profile "addons-837740"
I0915 06:30:52.343063 8422 addons.go:234] Setting addon yakd=true in "addons-837740"
I0915 06:30:52.343085 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.343591 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.344027 8422 addons.go:69] Setting inspektor-gadget=true in profile "addons-837740"
I0915 06:30:52.344050 8422 addons.go:234] Setting addon inspektor-gadget=true in "addons-837740"
I0915 06:30:52.344074 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.344477 8422 out.go:177] * Verifying Kubernetes components...
I0915 06:30:52.344719 8422 addons.go:69] Setting cloud-spanner=true in profile "addons-837740"
I0915 06:30:52.344743 8422 addons.go:234] Setting addon cloud-spanner=true in "addons-837740"
I0915 06:30:52.344770 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.344772 8422 addons.go:69] Setting metrics-server=true in profile "addons-837740"
I0915 06:30:52.344822 8422 addons.go:234] Setting addon metrics-server=true in "addons-837740"
I0915 06:30:52.344861 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.345171 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.345512 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.345894 8422 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-837740"
I0915 06:30:52.345936 8422 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-837740"
I0915 06:30:52.345967 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.346481 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.349269 8422 addons.go:69] Setting default-storageclass=true in profile "addons-837740"
I0915 06:30:52.349297 8422 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-837740"
I0915 06:30:52.349643 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.351831 8422 addons.go:69] Setting gcp-auth=true in profile "addons-837740"
I0915 06:30:52.351867 8422 mustload.go:65] Loading cluster: addons-837740
I0915 06:30:52.352187 8422 config.go:182] Loaded profile config "addons-837740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:30:52.359680 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.364904 8422 addons.go:69] Setting ingress=true in profile "addons-837740"
I0915 06:30:52.364993 8422 addons.go:234] Setting addon ingress=true in "addons-837740"
I0915 06:30:52.365073 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.365820 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.372443 8422 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-837740"
I0915 06:30:52.372484 8422 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-837740"
I0915 06:30:52.372520 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.373000 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.387743 8422 addons.go:69] Setting registry=true in profile "addons-837740"
I0915 06:30:52.387788 8422 addons.go:234] Setting addon registry=true in "addons-837740"
I0915 06:30:52.387825 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.388299 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.396102 8422 addons.go:69] Setting ingress-dns=true in profile "addons-837740"
I0915 06:30:52.396174 8422 addons.go:234] Setting addon ingress-dns=true in "addons-837740"
I0915 06:30:52.398247 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.398806 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.425321 8422 addons.go:69] Setting storage-provisioner=true in profile "addons-837740"
I0915 06:30:52.425362 8422 addons.go:234] Setting addon storage-provisioner=true in "addons-837740"
I0915 06:30:52.425402 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.425475 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.425856 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.450294 8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0915 06:30:52.476287 8422 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-837740"
I0915 06:30:52.476323 8422 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-837740"
I0915 06:30:52.476676 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.486281 8422 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0915 06:30:52.491489 8422 addons.go:69] Setting volcano=true in profile "addons-837740"
I0915 06:30:52.491523 8422 addons.go:234] Setting addon volcano=true in "addons-837740"
I0915 06:30:52.491559 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.492025 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.524840 8422 addons.go:69] Setting volumesnapshots=true in profile "addons-837740"
I0915 06:30:52.529213 8422 addons.go:234] Setting addon volumesnapshots=true in "addons-837740"
I0915 06:30:52.529289 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.570168 8422 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0915 06:30:52.570241 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0915 06:30:52.570335 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.594094 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.598962 8422 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0915 06:30:52.617832 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.628331 8422 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0915 06:30:52.628450 8422 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0915 06:30:52.630038 8422 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0915 06:30:52.630062 8422 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0915 06:30:52.630127 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.632074 8422 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0915 06:30:52.637435 8422 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0915 06:30:52.639751 8422 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0915 06:30:52.639773 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0915 06:30:52.639912 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.644727 8422 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0915 06:30:52.647582 8422 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0915 06:30:52.647652 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0915 06:30:52.647748 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.617921 8422 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0915 06:30:52.667783 8422 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0915 06:30:52.667809 8422 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0915 06:30:52.667881 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.625218 8422 addons.go:234] Setting addon default-storageclass=true in "addons-837740"
I0915 06:30:52.674533 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.675001 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.678849 8422 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0915 06:30:52.680867 8422 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0915 06:30:52.680890 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0915 06:30:52.680954 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.696243 8422 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0915 06:30:52.700473 8422 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0915 06:30:52.702625 8422 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0915 06:30:52.707019 8422 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0915 06:30:52.711432 8422 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0915 06:30:52.713401 8422 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0915 06:30:52.717753 8422 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0915 06:30:52.718340 8422 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0915 06:30:52.717764 8422 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0915 06:30:52.719877 8422 out.go:177] - Using image docker.io/registry:2.8.3
I0915 06:30:52.719315 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0915 06:30:52.719565 8422 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-837740"
I0915 06:30:52.734995 8422 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0915 06:30:52.735014 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0915 06:30:52.735077 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.742336 8422 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0915 06:30:52.742416 8422 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0915 06:30:52.742521 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.754703 8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0915 06:30:52.754784 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.755837 8422 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0915 06:30:52.757970 8422 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0915 06:30:52.759882 8422 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0915 06:30:52.767275 8422 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0915 06:30:52.767440 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0915 06:30:52.772811 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.778356 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:30:52.778825 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:30:52.798088 8422 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0915 06:30:52.798372 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.799293 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.800254 8422 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0915 06:30:52.800271 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0915 06:30:52.800329 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.834245 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.834963 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.848635 8422 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0915 06:30:52.855724 8422 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0915 06:30:52.855746 8422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0915 06:30:52.855803 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.857628 8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0915 06:30:52.857692 8422 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0915 06:30:52.857777 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:52.875682 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.903216 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.940106 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.944234 8422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0915 06:30:52.950257 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.955760 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.972182 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.972829 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.977571 8422 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0915 06:30:52.985021 8422 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0915 06:30:52.987363 8422 out.go:177] - Using image docker.io/busybox:stable
I0915 06:30:52.987767 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:52.989967 8422 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0915 06:30:52.990060 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0915 06:30:52.990126 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
W0915 06:30:53.012274 8422 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0915 06:30:53.012316 8422 retry.go:31] will retry after 330.561655ms: ssh: handshake failed: EOF
I0915 06:30:53.025820 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:53.034226 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:30:53.469649 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0915 06:30:53.579647 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0915 06:30:53.587309 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0915 06:30:53.658478 8422 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0915 06:30:53.658506 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0915 06:30:53.677491 8422 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0915 06:30:53.677528 8422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0915 06:30:53.684567 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0915 06:30:53.741538 8422 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0915 06:30:53.741562 8422 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0915 06:30:53.759773 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0915 06:30:53.778582 8422 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0915 06:30:53.778608 8422 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0915 06:30:53.805896 8422 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0915 06:30:53.805922 8422 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0915 06:30:53.821559 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0915 06:30:53.947952 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0915 06:30:53.949436 8422 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0915 06:30:53.949467 8422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0915 06:30:54.070300 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0915 06:30:54.070324 8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0915 06:30:54.156683 8422 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0915 06:30:54.156724 8422 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0915 06:30:54.180720 8422 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0915 06:30:54.180747 8422 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0915 06:30:54.299118 8422 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0915 06:30:54.299144 8422 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0915 06:30:54.391059 8422 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0915 06:30:54.391083 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0915 06:30:54.590282 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0915 06:30:54.741747 8422 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0915 06:30:54.741774 8422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0915 06:30:54.763220 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0915 06:30:54.763246 8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0915 06:30:54.781434 8422 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0915 06:30:54.781459 8422 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0915 06:30:54.791889 8422 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0915 06:30:54.791914 8422 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0915 06:30:54.811362 8422 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0915 06:30:54.811390 8422 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0915 06:30:54.859134 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0915 06:30:54.933013 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0915 06:30:54.933041 8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0915 06:30:54.972716 8422 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0915 06:30:54.972739 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0915 06:30:55.024128 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0915 06:30:55.024175 8422 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0915 06:30:55.035127 8422 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0915 06:30:55.035157 8422 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0915 06:30:55.055284 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0915 06:30:55.162891 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0915 06:30:55.238528 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0915 06:30:55.238560 8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0915 06:30:55.271807 8422 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0915 06:30:55.271839 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0915 06:30:55.344045 8422 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0915 06:30:55.344076 8422 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0915 06:30:55.516101 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0915 06:30:55.611415 8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0915 06:30:55.611461 8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0915 06:30:55.650155 8422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.705891344s)
I0915 06:30:55.650275 8422 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0915 06:30:55.650207 8422 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.672615235s)
I0915 06:30:55.651130 8422 node_ready.go:35] waiting up to 6m0s for node "addons-837740" to be "Ready" ...
I0915 06:30:55.650229 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.180557832s)
I0915 06:30:55.657403 8422 node_ready.go:49] node "addons-837740" has status "Ready":"True"
I0915 06:30:55.657433 8422 node_ready.go:38] duration metric: took 6.279416ms for node "addons-837740" to be "Ready" ...
I0915 06:30:55.657442 8422 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 06:30:55.677900 8422 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bqq5d" in "kube-system" namespace to be "Ready" ...
I0915 06:30:55.862100 8422 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0915 06:30:55.862131 8422 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0915 06:30:55.867416 8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0915 06:30:55.867455 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0915 06:30:56.155045 8422 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-837740" context rescaled to 1 replicas
I0915 06:30:56.221537 8422 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0915 06:30:56.221568 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0915 06:30:56.221856 8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0915 06:30:56.221882 8422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0915 06:30:56.577111 8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0915 06:30:56.577187 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0915 06:30:56.600663 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0915 06:30:56.710538 8422 pod_ready.go:93] pod "coredns-7c65d6cfc9-bqq5d" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:56.710567 8422 pod_ready.go:82] duration metric: took 1.032631894s for pod "coredns-7c65d6cfc9-bqq5d" in "kube-system" namespace to be "Ready" ...
I0915 06:30:56.710579 8422 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wglrg" in "kube-system" namespace to be "Ready" ...
I0915 06:30:56.873960 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.294275971s)
I0915 06:30:56.874142 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.286806951s)
I0915 06:30:57.116832 8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0915 06:30:57.116894 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0915 06:30:57.732369 8422 pod_ready.go:93] pod "coredns-7c65d6cfc9-wglrg" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:57.732447 8422 pod_ready.go:82] duration metric: took 1.021860536s for pod "coredns-7c65d6cfc9-wglrg" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.732473 8422 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.762630 8422 pod_ready.go:93] pod "etcd-addons-837740" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:57.762695 8422 pod_ready.go:82] duration metric: took 30.200526ms for pod "etcd-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.762720 8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.791709 8422 pod_ready.go:93] pod "kube-apiserver-addons-837740" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:57.791784 8422 pod_ready.go:82] duration metric: took 29.044563ms for pod "kube-apiserver-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.791811 8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.807110 8422 pod_ready.go:93] pod "kube-controller-manager-addons-837740" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:57.807175 8422 pod_ready.go:82] duration metric: took 15.344201ms for pod "kube-controller-manager-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:57.807201 8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjdxv" in "kube-system" namespace to be "Ready" ...
I0915 06:30:58.054482 8422 pod_ready.go:93] pod "kube-proxy-vjdxv" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:58.054579 8422 pod_ready.go:82] duration metric: took 247.356077ms for pod "kube-proxy-vjdxv" in "kube-system" namespace to be "Ready" ...
I0915 06:30:58.054611 8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:58.100627 8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0915 06:30:58.100694 8422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0915 06:30:58.122052 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.437447888s)
I0915 06:30:58.455228 8422 pod_ready.go:93] pod "kube-scheduler-addons-837740" in "kube-system" namespace has status "Ready":"True"
I0915 06:30:58.455295 8422 pod_ready.go:82] duration metric: took 400.645849ms for pod "kube-scheduler-addons-837740" in "kube-system" namespace to be "Ready" ...
I0915 06:30:58.455329 8422 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace to be "Ready" ...
I0915 06:30:58.636800 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0915 06:30:59.743893 8422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0915 06:30:59.744019 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:30:59.772013 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:31:00.498138 8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
I0915 06:31:00.839311 8422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0915 06:31:01.144376 8422 addons.go:234] Setting addon gcp-auth=true in "addons-837740"
I0915 06:31:01.144483 8422 host.go:66] Checking if "addons-837740" exists ...
I0915 06:31:01.144999 8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
I0915 06:31:01.171298 8422 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0915 06:31:01.171398 8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
I0915 06:31:01.195101 8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
I0915 06:31:02.962472 8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
I0915 06:31:03.243814 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.422220417s)
I0915 06:31:03.243880 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.484083316s)
I0915 06:31:03.243897 8422 addons.go:475] Verifying addon ingress=true in "addons-837740"
I0915 06:31:03.246172 8422 out.go:177] * Verifying ingress addon...
I0915 06:31:03.249453 8422 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0915 06:31:03.253659 8422 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0915 06:31:03.253689 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:03.754543 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:04.310699 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:04.785972 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:04.979597 8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
I0915 06:31:05.303688 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:05.381688 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.433701251s)
I0915 06:31:05.381793 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.791485476s)
I0915 06:31:05.382043 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.522881759s)
I0915 06:31:05.382084 8422 addons.go:475] Verifying addon registry=true in "addons-837740"
I0915 06:31:05.382331 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.327014308s)
I0915 06:31:05.382368 8422 addons.go:475] Verifying addon metrics-server=true in "addons-837740"
I0915 06:31:05.382448 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.219494557s)
I0915 06:31:05.382710 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.866576126s)
W0915 06:31:05.382737 8422 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0915 06:31:05.382752 8422 retry.go:31] will retry after 292.718465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0915 06:31:05.382815 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.782083545s)
I0915 06:31:05.385111 8422 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-837740 service yakd-dashboard -n yakd-dashboard
I0915 06:31:05.385121 8422 out.go:177] * Verifying registry addon...
I0915 06:31:05.388627 8422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0915 06:31:05.452051 8422 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0915 06:31:05.452074 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:05.676083 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0915 06:31:05.794222 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:05.894401 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:06.257441 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:06.371074 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.734155373s)
I0915 06:31:06.371245 8422 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-837740"
I0915 06:31:06.371196 8422 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.199838625s)
I0915 06:31:06.373677 8422 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0915 06:31:06.373762 8422 out.go:177] * Verifying csi-hostpath-driver addon...
I0915 06:31:06.376746 8422 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0915 06:31:06.377481 8422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0915 06:31:06.379377 8422 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0915 06:31:06.379439 8422 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0915 06:31:06.388908 8422 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0915 06:31:06.388984 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:06.392286 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:06.445609 8422 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0915 06:31:06.445688 8422 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0915 06:31:06.487848 8422 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0915 06:31:06.487919 8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0915 06:31:06.571993 8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0915 06:31:06.753824 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:06.884720 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:06.892459 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:07.254674 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:07.383169 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:07.399810 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:07.495117 8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
I0915 06:31:07.754760 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:07.882386 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:07.985233 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:08.092864 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.416683008s)
I0915 06:31:08.093017 8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.520952208s)
I0915 06:31:08.095982 8422 addons.go:475] Verifying addon gcp-auth=true in "addons-837740"
I0915 06:31:08.098652 8422 out.go:177] * Verifying gcp-auth addon...
I0915 06:31:08.101212 8422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0915 06:31:08.104811 8422 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0915 06:31:08.254502 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:08.383216 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:08.393007 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:08.754922 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:08.882436 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:08.892712 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:09.254472 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:09.382834 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:09.393182 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:09.761232 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:09.882704 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:09.892757 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:09.962949 8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
I0915 06:31:10.255429 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:10.382149 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:10.392450 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:10.754331 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:10.883315 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:10.892098 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:11.256370 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:11.382995 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:11.393047 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:11.468274 8422 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"True"
I0915 06:31:11.468351 8422 pod_ready.go:82] duration metric: took 13.012999524s for pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace to be "Ready" ...
I0915 06:31:11.468377 8422 pod_ready.go:39] duration metric: took 15.810922224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 06:31:11.468423 8422 api_server.go:52] waiting for apiserver process to appear ...
I0915 06:31:11.468520 8422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 06:31:11.486103 8422 api_server.go:72] duration metric: took 19.143465933s to wait for apiserver process to appear ...
I0915 06:31:11.486131 8422 api_server.go:88] waiting for apiserver healthz status ...
I0915 06:31:11.486153 8422 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0915 06:31:11.494253 8422 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0915 06:31:11.495595 8422 api_server.go:141] control plane version: v1.31.1
I0915 06:31:11.495626 8422 api_server.go:131] duration metric: took 9.487022ms to wait for apiserver health ...
I0915 06:31:11.495635 8422 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 06:31:11.504819 8422 system_pods.go:59] 17 kube-system pods found
I0915 06:31:11.504857 8422 system_pods.go:61] "coredns-7c65d6cfc9-wglrg" [b6844185-6d57-460b-bedc-75eb27fab2b2] Running
I0915 06:31:11.504870 8422 system_pods.go:61] "csi-hostpath-attacher-0" [4259dd24-69b8-4f9a-b344-93e221d119f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0915 06:31:11.504879 8422 system_pods.go:61] "csi-hostpath-resizer-0" [f7ab10d0-07f7-49fe-94e9-83b4b658c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0915 06:31:11.504890 8422 system_pods.go:61] "csi-hostpathplugin-m2zjj" [6897b926-699d-4e69-858b-dfb3b5ae22a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0915 06:31:11.504901 8422 system_pods.go:61] "etcd-addons-837740" [54093b45-848e-42fb-9d63-0326870285f2] Running
I0915 06:31:11.504906 8422 system_pods.go:61] "kube-apiserver-addons-837740" [955b748d-a741-45cf-9d92-dff6d388b528] Running
I0915 06:31:11.504914 8422 system_pods.go:61] "kube-controller-manager-addons-837740" [4b8af77d-cf8c-4f57-9308-bb8e3f97ead7] Running
I0915 06:31:11.504921 8422 system_pods.go:61] "kube-ingress-dns-minikube" [226b1200-80f2-453e-910a-99218aad1e1d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0915 06:31:11.504928 8422 system_pods.go:61] "kube-proxy-vjdxv" [e1764451-6cbe-4223-b73f-5a1621e02c92] Running
I0915 06:31:11.504933 8422 system_pods.go:61] "kube-scheduler-addons-837740" [0d56d017-af93-4829-b0b4-34fa2a27834a] Running
I0915 06:31:11.504939 8422 system_pods.go:61] "metrics-server-84c5f94fbc-bgbxc" [f10bfbc8-7858-4a49-9947-c358eaefb7b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0915 06:31:11.504945 8422 system_pods.go:61] "nvidia-device-plugin-daemonset-tt4ct" [201ece5f-7d16-40c8-b54a-2afc0f9b1595] Running
I0915 06:31:11.504951 8422 system_pods.go:61] "registry-66c9cd494c-7gzvx" [1a2130f7-6cbe-4a8b-bea3-e3e4436003d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0915 06:31:11.504957 8422 system_pods.go:61] "registry-proxy-htg6g" [53474271-c9f2-4050-bf68-df5e1935aa85] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0915 06:31:11.504971 8422 system_pods.go:61] "snapshot-controller-56fcc65765-2rhl5" [a143ff8a-8d41-45ea-82b5-9097104ed247] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0915 06:31:11.504978 8422 system_pods.go:61] "snapshot-controller-56fcc65765-pbftt" [f2ca4d13-a7b0-41a7-a845-13c6e7c1e7ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0915 06:31:11.504982 8422 system_pods.go:61] "storage-provisioner" [651c275e-abb9-49f8-b7b5-d3928708b097] Running
I0915 06:31:11.504992 8422 system_pods.go:74] duration metric: took 9.350022ms to wait for pod list to return data ...
I0915 06:31:11.505003 8422 default_sa.go:34] waiting for default service account to be created ...
I0915 06:31:11.507963 8422 default_sa.go:45] found service account: "default"
I0915 06:31:11.507992 8422 default_sa.go:55] duration metric: took 2.982769ms for default service account to be created ...
I0915 06:31:11.508001 8422 system_pods.go:116] waiting for k8s-apps to be running ...
I0915 06:31:11.518544 8422 system_pods.go:86] 17 kube-system pods found
I0915 06:31:11.518625 8422 system_pods.go:89] "coredns-7c65d6cfc9-wglrg" [b6844185-6d57-460b-bedc-75eb27fab2b2] Running
I0915 06:31:11.518650 8422 system_pods.go:89] "csi-hostpath-attacher-0" [4259dd24-69b8-4f9a-b344-93e221d119f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0915 06:31:11.518697 8422 system_pods.go:89] "csi-hostpath-resizer-0" [f7ab10d0-07f7-49fe-94e9-83b4b658c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0915 06:31:11.518726 8422 system_pods.go:89] "csi-hostpathplugin-m2zjj" [6897b926-699d-4e69-858b-dfb3b5ae22a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0915 06:31:11.518747 8422 system_pods.go:89] "etcd-addons-837740" [54093b45-848e-42fb-9d63-0326870285f2] Running
I0915 06:31:11.518781 8422 system_pods.go:89] "kube-apiserver-addons-837740" [955b748d-a741-45cf-9d92-dff6d388b528] Running
I0915 06:31:11.518805 8422 system_pods.go:89] "kube-controller-manager-addons-837740" [4b8af77d-cf8c-4f57-9308-bb8e3f97ead7] Running
I0915 06:31:11.518829 8422 system_pods.go:89] "kube-ingress-dns-minikube" [226b1200-80f2-453e-910a-99218aad1e1d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0915 06:31:11.518860 8422 system_pods.go:89] "kube-proxy-vjdxv" [e1764451-6cbe-4223-b73f-5a1621e02c92] Running
I0915 06:31:11.518884 8422 system_pods.go:89] "kube-scheduler-addons-837740" [0d56d017-af93-4829-b0b4-34fa2a27834a] Running
I0915 06:31:11.518906 8422 system_pods.go:89] "metrics-server-84c5f94fbc-bgbxc" [f10bfbc8-7858-4a49-9947-c358eaefb7b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0915 06:31:11.518951 8422 system_pods.go:89] "nvidia-device-plugin-daemonset-tt4ct" [201ece5f-7d16-40c8-b54a-2afc0f9b1595] Running
I0915 06:31:11.518978 8422 system_pods.go:89] "registry-66c9cd494c-7gzvx" [1a2130f7-6cbe-4a8b-bea3-e3e4436003d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0915 06:31:11.519001 8422 system_pods.go:89] "registry-proxy-htg6g" [53474271-c9f2-4050-bf68-df5e1935aa85] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0915 06:31:11.519039 8422 system_pods.go:89] "snapshot-controller-56fcc65765-2rhl5" [a143ff8a-8d41-45ea-82b5-9097104ed247] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0915 06:31:11.519061 8422 system_pods.go:89] "snapshot-controller-56fcc65765-pbftt" [f2ca4d13-a7b0-41a7-a845-13c6e7c1e7ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0915 06:31:11.519079 8422 system_pods.go:89] "storage-provisioner" [651c275e-abb9-49f8-b7b5-d3928708b097] Running
I0915 06:31:11.519117 8422 system_pods.go:126] duration metric: took 11.107413ms to wait for k8s-apps to be running ...
I0915 06:31:11.519139 8422 system_svc.go:44] waiting for kubelet service to be running ....
I0915 06:31:11.519232 8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0915 06:31:11.534655 8422 system_svc.go:56] duration metric: took 15.497771ms WaitForService to wait for kubelet
I0915 06:31:11.534732 8422 kubeadm.go:582] duration metric: took 19.19209902s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0915 06:31:11.534781 8422 node_conditions.go:102] verifying NodePressure condition ...
I0915 06:31:11.538526 8422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0915 06:31:11.538608 8422 node_conditions.go:123] node cpu capacity is 2
I0915 06:31:11.538633 8422 node_conditions.go:105] duration metric: took 3.812258ms to run NodePressure ...
I0915 06:31:11.538657 8422 start.go:241] waiting for startup goroutines ...
I0915 06:31:11.757093 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:11.885155 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:11.893664 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:12.255360 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:12.382863 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:12.393185 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:12.754325 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:12.881788 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:12.892037 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:13.254545 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:13.382845 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:13.392737 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:13.753725 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:13.882241 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:13.892028 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:14.254320 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:14.386265 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:14.392972 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:14.753500 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:14.882807 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:14.892120 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:15.254199 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:15.382950 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:15.392494 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:15.754020 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:15.882662 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:15.892296 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:16.254888 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:16.382418 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:16.393250 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:16.754039 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:16.882241 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:16.892621 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:17.254624 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:17.382485 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:17.393310 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:17.755257 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:17.883602 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:17.893211 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:18.259780 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:18.383880 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:18.393748 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:18.754743 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:18.882696 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:18.892540 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:19.253772 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:19.383228 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:19.393074 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:19.753792 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:19.883440 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:19.892913 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:20.255588 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:20.382877 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:20.393263 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:20.754949 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:20.883109 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:20.892431 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:21.253676 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:21.383155 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:21.392403 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:21.754441 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:21.881779 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:21.892685 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:22.254178 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:22.386931 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:22.392753 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:22.754101 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:22.882483 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:22.893343 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:23.254567 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:23.382710 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:23.392946 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:23.753900 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:23.882587 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:23.892235 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:24.253912 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:24.382640 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:24.392170 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:24.754461 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:24.883245 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:24.892365 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:25.254512 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:25.383120 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:25.392799 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:25.754325 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:25.882976 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:25.893193 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:26.254244 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:26.382872 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:26.392943 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:26.754588 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:26.884131 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:26.892714 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:27.256212 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:27.383744 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:27.392991 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0915 06:31:27.754117 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:27.883430 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:27.894442 8422 kapi.go:107] duration metric: took 22.505814726s to wait for kubernetes.io/minikube-addons=registry ...
I0915 06:31:28.254392 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:28.382953 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:28.754211 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:28.882150 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:29.254186 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:29.382438 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:29.754447 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:29.882147 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:30.257168 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:30.382948 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:30.754165 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:30.882885 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:31.254556 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:31.382398 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:31.754056 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:31.883308 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:32.254638 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:32.383492 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:32.755487 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:32.884106 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:33.254838 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:33.385272 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:33.754789 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:33.882914 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:34.254488 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:34.383030 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:34.754517 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:34.882289 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:35.261128 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:35.382595 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:35.755502 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:35.882203 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:36.254159 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:36.391613 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:36.754866 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:36.885846 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:37.254761 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:37.383119 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:37.754739 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:37.882657 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:38.253781 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:38.382800 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:38.754574 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:38.882536 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:39.254937 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:39.382914 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:39.760491 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:39.884782 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:40.253844 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:40.382741 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:40.757903 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:40.888293 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:41.255242 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:41.383399 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:41.754387 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:41.882927 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:42.255922 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:42.383652 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:42.754662 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:42.883036 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:43.254252 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:43.383196 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:43.753750 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:43.882467 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:44.255434 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:44.383311 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:44.754587 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:44.886533 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:45.255415 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:45.384329 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:45.753651 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:45.883542 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:46.255516 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:46.382297 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:46.754440 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:46.882768 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:47.253931 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:47.383795 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:47.754506 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:47.882595 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:48.254332 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:48.383253 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:48.754824 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:48.882313 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:49.253628 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:49.382245 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:49.755581 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:49.882703 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:50.254161 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:50.397553 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:50.761126 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:50.886439 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:51.258543 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:51.383474 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:51.754795 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:51.882228 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:52.254792 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:52.382538 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:52.754377 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:52.883287 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:53.254655 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:53.382333 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:53.754883 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:53.882581 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:54.254856 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:54.382835 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:54.754782 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:54.882221 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:55.254259 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:55.382641 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:55.753779 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:55.882148 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:56.254427 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:56.381759 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:56.754630 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:56.883274 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:57.255042 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:57.382494 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:57.754195 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:57.882524 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:58.253836 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:58.382372 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0915 06:31:58.758367 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:58.882893 8422 kapi.go:107] duration metric: took 52.505409513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0915 06:31:59.253959 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:31:59.754456 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:00.265549 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:00.755547 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:01.254128 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:01.754077 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:02.253874 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:02.758995 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:03.254675 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:03.754249 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:04.253794 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:04.754044 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:05.253945 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:05.754526 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:06.254157 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:06.753899 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:07.254431 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:07.753796 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:08.254232 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:08.753738 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:09.256547 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:09.754651 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:10.253852 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:10.753973 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:11.254402 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:11.754150 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:12.255858 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:12.755873 8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0915 06:32:13.259355 8422 kapi.go:107] duration metric: took 1m10.009898336s to wait for app.kubernetes.io/name=ingress-nginx ...
I0915 06:32:30.106100 8422 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0915 06:32:30.106132 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:30.605881 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:31.105279 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:31.604945 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:32.105186 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:32.604231 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:33.105772 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:33.605525 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:34.105268 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:34.604720 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:35.105952 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:35.605051 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:36.105213 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:36.604877 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:37.105460 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:37.605672 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:38.104711 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:38.604819 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:39.104896 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:39.604760 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:40.105539 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:40.605571 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:41.106572 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:41.604977 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:42.114040 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:42.605028 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:43.108929 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:43.604614 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:44.106042 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:44.605810 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:45.106244 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:45.605553 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:46.105844 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:46.604655 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:47.105085 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:47.605377 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:48.105325 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:48.604660 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:49.105787 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:49.604681 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:50.104654 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:50.605443 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:51.105380 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:51.605028 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:52.104483 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:52.605549 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:53.104801 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:53.604454 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:54.105516 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:54.605020 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:55.105247 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:55.605095 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:56.105120 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:56.605298 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:57.106194 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:57.604630 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:58.105810 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:58.605945 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:59.105305 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:32:59.605211 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:00.114101 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:00.606350 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:01.106444 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:01.604475 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:02.105101 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:02.605340 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:03.105069 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:03.604633 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:04.105476 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:04.605013 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:05.104820 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:05.605332 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:06.110691 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:06.604969 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:07.106147 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:07.605552 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:08.105452 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:08.605411 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:09.105470 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:09.605466 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:10.105537 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:10.605666 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:11.105667 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:11.606023 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:12.104777 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:12.604287 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:13.105685 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:13.605265 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:14.105625 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:14.611145 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:15.105781 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:15.604757 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:16.104518 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:16.604853 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:17.105667 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:17.604650 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:18.106331 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:18.605481 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:19.105506 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:19.605343 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:20.104983 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:20.604769 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:21.106150 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:21.604474 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:22.105183 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:22.605320 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:23.106654 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:23.604739 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:24.105163 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:24.605523 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:25.105670 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:25.605502 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:26.105159 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:26.607213 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:27.104849 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:27.605951 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:28.104959 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:28.604540 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:29.105782 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:29.604966 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:30.105838 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:30.604818 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:31.104723 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:31.605196 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:32.105056 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:32.605057 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:33.106344 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:33.605698 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:34.105974 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:34.604782 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:35.105198 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:35.605547 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:36.105873 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:36.605053 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:37.105087 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:37.605114 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:38.106338 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:38.605076 8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0915 06:33:39.106898 8422 kapi.go:107] duration metric: took 2m31.005685267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0915 06:33:39.109602 8422 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-837740 cluster.
I0915 06:33:39.111761 8422 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0915 06:33:39.114067 8422 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0915 06:33:39.116350 8422 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0915 06:33:39.118718 8422 addons.go:510] duration metric: took 2m46.775743472s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0915 06:33:39.118770 8422 start.go:246] waiting for cluster config update ...
I0915 06:33:39.118792 8422 start.go:255] writing updated cluster config ...
I0915 06:33:39.119069 8422 ssh_runner.go:195] Run: rm -f paused
I0915 06:33:39.498467 8422 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
I0915 06:33:39.500857 8422 out.go:177] * Done! kubectl is now configured to use "addons-837740" cluster and "default" namespace by default
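(Editor's note on the `gcp-auth` hint above: the addon's webhook skips pods that carry the `gcp-auth-skip-secret` label key. A minimal, hypothetical pod spec opting out of credential mounting might look like the following — the pod name and command are illustrative, not from this run.)

```yaml
# Hypothetical example: opt a pod out of gcp-auth credential injection.
# Only the label key matters to the webhook; name/image/command are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-demo            # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
```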
==> Docker <==
Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.849914988Z" level=info msg="ignoring event" container=17350eb906526d1cdde2a4d4fd509447f3457a8bb24f7d71a2548c5a64cfc691 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.868192544Z" level=info msg="ignoring event" container=4bee5472e10b4907d0e0d39511e68d8778c6611db384488a4f9eaa2293076903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.888555920Z" level=info msg="ignoring event" container=687f6cd8a45bd6b244da1fd9fdbca26d94e69ee26439ae695f9a29214a50340d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.971459466Z" level=info msg="ignoring event" container=6b294fd05e77d3c960de00f514fea241a5a11c4d6e1604267eefc3fe2820b63a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.978527052Z" level=info msg="ignoring event" container=d5b301403156dc4a6d9b072300791fea1085cbc90d5e1c2b3ec9f62a60b70a14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:10 addons-837740 dockerd[1283]: time="2024-09-15T06:43:10.080604652Z" level=info msg="ignoring event" container=71649d8ca1311f2ebbe2004db2ed56a44df9d3a1989e1f5dd061b056ff1d8698 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:10 addons-837740 dockerd[1283]: time="2024-09-15T06:43:10.179725320Z" level=info msg="ignoring event" container=69a0efde4f17218f1cd7942ce79ec392f788c7b3c9dc9a6ca86e2a18945aff75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:10 addons-837740 dockerd[1283]: time="2024-09-15T06:43:10.219264260Z" level=info msg="ignoring event" container=d814e8b2e2b89dc56b69eacfc3c3ef2e4894b563f2b6fdacf2ed20529053a843 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:13 addons-837740 cri-dockerd[1542]: time="2024-09-15T06:43:13Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 15 06:43:13 addons-837740 dockerd[1283]: time="2024-09-15T06:43:13.686464163Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 15 06:43:13 addons-837740 dockerd[1283]: time="2024-09-15T06:43:13.689130381Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 15 06:43:14 addons-837740 dockerd[1283]: time="2024-09-15T06:43:14.662954216Z" level=info msg="ignoring event" container=3f46650b6dd268c5a1476ba015e9087e54f5d9b549fca258a34436b53fc8ee9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.427701971Z" level=info msg="ignoring event" container=c2fe5b3d8de6a6cf17e7bbf02209f630d939c687ce57fd78ae95776c9fd94995 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.447995670Z" level=info msg="ignoring event" container=fd3f1ab94bcda2093374499980a9f67c33628b620c0cc4b96803f9472e1a220d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.615581941Z" level=info msg="ignoring event" container=f377f9ac097a400de8a0883500d32b4f6abd638c22aef91a4762ba8350d15710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.636278200Z" level=info msg="ignoring event" container=59a04b328fcdab08a0f4647fd04f0a1fdbaa8d6a9b7c71700c158b3774dc1c49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:23 addons-837740 dockerd[1283]: time="2024-09-15T06:43:23.178957964Z" level=info msg="ignoring event" container=7c0c3036c0a2d5acd2babaadf1462c4e2a9bf95299afd8c21ff6e8aa7178a4d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:23 addons-837740 dockerd[1283]: time="2024-09-15T06:43:23.301935763Z" level=info msg="ignoring event" container=0e1c8d4e0ea0f3728998fe5b6a9ac1ce7b97121e1498881e67e209f008e7f6c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:27 addons-837740 dockerd[1283]: time="2024-09-15T06:43:27.794155287Z" level=info msg="ignoring event" container=9a84052494ec7d2432a715fcf58f2e614975f9f7102d47422ccb553158aea38b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:34 addons-837740 cri-dockerd[1542]: time="2024-09-15T06:43:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1234694c78ad9e8ecd8931c073c64ca75118f5c2dc288e47d54f89b739dc4cf3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 15 06:43:34 addons-837740 dockerd[1283]: time="2024-09-15T06:43:34.477955216Z" level=info msg="ignoring event" container=4aa7443cf5a49d63ffcd3fec8f8c32fa724815130e094fb6da70b4c202f2b193 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.297030233Z" level=info msg="ignoring event" container=3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.360508651Z" level=info msg="ignoring event" container=8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.667250605Z" level=info msg="ignoring event" container=9ba3c3ed633c3368e208ef18ddde9526220e23b6df04d6930e5b1e039bed7dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.788570422Z" level=info msg="ignoring event" container=21b8d568ae181fc9fdc7cd300ac225558db58efea00dfb444abc3db449b38932 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
5d3d92bbe7e73 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 8943506438f1e gcp-auth-89d5ffd79-4vxbx
1515171508f56 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 d339e04ac54e8 ingress-nginx-controller-bc57996ff-9d94n
a84f7b7cff6ad registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 d25b54090e6ce ingress-nginx-admission-patch-tm9xd
cd2acbf476609 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 80bcb67b609a0 ingress-nginx-admission-create-7wph7
5833b76ec193b marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 12 minutes ago Running yakd 0 ccc3387930cfa yakd-dashboard-67d98fc6b-txt6k
5616443de8678 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 a26247b7fd47f local-path-provisioner-86d989889c-ht8wp
07b93ec46d2e7 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 1bd6cea006a5c cloud-spanner-emulator-769b77f747-f96b4
7547215974b9a gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c 12 minutes ago Running minikube-ingress-dns 0 94a8919b20678 kube-ingress-dns-minikube
ac1dad073bd8c nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 9a8d3e0b5c154 nvidia-device-plugin-daemonset-tt4ct
a2553db37f09c ba04bb24b9575 12 minutes ago Running storage-provisioner 0 8f0abf1d61dc9 storage-provisioner
54f5f4f11f36a 2f6c962e7b831 12 minutes ago Running coredns 0 c08d9d16f10f8 coredns-7c65d6cfc9-wglrg
7e4e2e7c9f9d0 24a140c548c07 12 minutes ago Running kube-proxy 0 ed5dd56271f16 kube-proxy-vjdxv
f3f6b32525e6f 27e3830e14027 12 minutes ago Running etcd 0 ca5702a4a99a4 etcd-addons-837740
3696b00b24559 279f381cb3736 12 minutes ago Running kube-controller-manager 0 42747464bcc6f kube-controller-manager-addons-837740
35aa9c1536cf4 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 81f56909c7a3b kube-scheduler-addons-837740
484ea520c5e9c d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 387c88f10c455 kube-apiserver-addons-837740
==> controller_ingress [1515171508f5] <==
I0915 06:32:13.420502 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"3c107480-4a43-4754-9860-5286b822a234", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0915 06:32:13.420792 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1189cc4c-e2e6-41f0-a058-700fc09bd4a4", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0915 06:32:14.596432 7 nginx.go:317] "Starting NGINX process"
I0915 06:32:14.598042 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0915 06:32:14.599269 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0915 06:32:14.599451 7 controller.go:193] "Configuration changes detected, backend reload required"
I0915 06:32:14.616247 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0915 06:32:14.617568 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-9d94n"
I0915 06:32:14.632730 7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-9d94n" node="addons-837740"
I0915 06:32:14.652133 7 controller.go:213] "Backend successfully reloaded"
I0915 06:32:14.652251 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0915 06:32:14.652667 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9d94n", UID:"ef9a33cf-8b7c-474e-9d5c-c747abe32cc7", APIVersion:"v1", ResourceVersion:"1231", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0915 06:43:33.271015 7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
I0915 06:43:33.289273 7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.019s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.019s testedConfigurationSize:18.1kB}
I0915 06:43:33.289325 7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
I0915 06:43:33.296852 7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
I0915 06:43:33.298154 7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"f503ed9e-ee0f-4710-9ee6-c18661713cf2", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2768", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0915 06:43:33.298186 7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
I0915 06:43:33.298342 7 controller.go:193] "Configuration changes detected, backend reload required"
I0915 06:43:33.355699 7 controller.go:213] "Backend successfully reloaded"
I0915 06:43:33.356341 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9d94n", UID:"ef9a33cf-8b7c-474e-9d5c-c747abe32cc7", APIVersion:"v1", ResourceVersion:"1231", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0915 06:43:36.631612 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
I0915 06:43:36.631723 7 controller.go:193] "Configuration changes detected, backend reload required"
I0915 06:43:36.677007 7 controller.go:213] "Backend successfully reloaded"
I0915 06:43:36.677685 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9d94n", UID:"ef9a33cf-8b7c-474e-9d5c-c747abe32cc7", APIVersion:"v1", ResourceVersion:"1231", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
==> coredns [54f5f4f11f36] <==
[INFO] 10.244.0.7:52374 - 38856 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112664s
[INFO] 10.244.0.7:41098 - 3685 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002214868s
[INFO] 10.244.0.7:41098 - 22115 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001833461s
[INFO] 10.244.0.7:50264 - 59173 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181029s
[INFO] 10.244.0.7:50264 - 64038 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100177s
[INFO] 10.244.0.7:59330 - 5723 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123585s
[INFO] 10.244.0.7:59330 - 863 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000065625s
[INFO] 10.244.0.7:59228 - 30202 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010752s
[INFO] 10.244.0.7:59228 - 61414 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034749s
[INFO] 10.244.0.7:47583 - 13835 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079123s
[INFO] 10.244.0.7:47583 - 11276 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038121s
[INFO] 10.244.0.7:36593 - 33327 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001495376s
[INFO] 10.244.0.7:36593 - 5155 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001127295s
[INFO] 10.244.0.7:54986 - 56486 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000059085s
[INFO] 10.244.0.7:54986 - 5017 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046958s
[INFO] 10.244.0.25:40158 - 62681 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004309976s
[INFO] 10.244.0.25:49141 - 35801 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000589662s
[INFO] 10.244.0.25:33254 - 12597 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185747s
[INFO] 10.244.0.25:42529 - 49710 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226536s
[INFO] 10.244.0.25:53656 - 22803 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130101s
[INFO] 10.244.0.25:59620 - 40297 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000251545s
[INFO] 10.244.0.25:37456 - 19181 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007152876s
[INFO] 10.244.0.25:49040 - 6923 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007937705s
[INFO] 10.244.0.25:57749 - 58328 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00205546s
[INFO] 10.244.0.25:33131 - 7916 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002060276s
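(Editor's note on the coredns section above: the bursts of NXDOMAIN answers for names like `registry.kube-system.svc.cluster.local.svc.cluster.local` come from resolver search-list expansion. The resolv.conf rewrite in the Docker log shows `options ndots:5`, so a name with fewer than five dots is tried with each search domain appended before being tried as-is. A rough sketch of that expansion order, under the assumption of glibc-style behavior and the search list seen in this log:)

```python
# Sketch of resolver search-list expansion ("ndots" behavior).
# With ndots=5, a name containing fewer than 5 dots is queried with each
# search domain appended first, and only then as the literal name --
# which is why the log shows several NXDOMAINs before the final NOERROR.
def candidate_names(name: str, search_domains: list[str], ndots: int = 5) -> list[str]:
    as_is = [name]
    expanded = [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:
        # Relative-looking name: search domains first, literal form last.
        return expanded + as_is
    # Enough dots: try the literal name first.
    return as_is + expanded

# Search list as seen in this cluster's logs (order is illustrative).
search = ["kube-system.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "us-east-2.compute.internal"]
for candidate in candidate_names("registry.kube-system.svc.cluster.local", search):
    print(candidate)
```

(This is a simplification: real resolvers also honor trailing dots, `rotate`, and per-query options, which this sketch ignores.)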
==> describe nodes <==
Name: addons-837740
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-837740
kubernetes.io/os=linux
minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
minikube.k8s.io/name=addons-837740
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_15T06_30_48_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-837740
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 15 Sep 2024 06:30:45 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-837740
AcquireTime: <unset>
RenewTime: Sun, 15 Sep 2024 06:43:32 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 15 Sep 2024 06:39:28 +0000 Sun, 15 Sep 2024 06:30:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 15 Sep 2024 06:39:28 +0000 Sun, 15 Sep 2024 06:30:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 15 Sep 2024 06:39:28 +0000 Sun, 15 Sep 2024 06:30:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 15 Sep 2024 06:39:28 +0000 Sun, 15 Sep 2024 06:30:45 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-837740
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
System Info:
Machine ID: 0f74f9ae52494a289584ca577801b569
System UUID: b1422836-2ae3-412b-9023-174332602f9a
Boot ID: 72fc410e-b80c-4eb1-a965-d925e9faaac6
Kernel Version: 5.15.0-1069-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s
default cloud-spanner-emulator-769b77f747-f96b4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s
gcp-auth gcp-auth-89d5ffd79-4vxbx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-9d94n 100m (5%) 0 (0%) 90Mi (1%) 0 (0%) 12m
kube-system coredns-7c65d6cfc9-wglrg 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 12m
kube-system etcd-addons-837740 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 12m
kube-system kube-apiserver-addons-837740 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-837740 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-vjdxv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-837740 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system nvidia-device-plugin-daemonset-tt4ct 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-ht8wp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
yakd-dashboard yakd-dashboard-67d98fc6b-txt6k 0 (0%) 0 (0%) 128Mi (1%) 256Mi (3%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 388Mi (4%) 426Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-837740 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-837740 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-837740 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-837740 event: Registered Node addons-837740 in Controller
==> dmesg <==
[Sep15 06:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015640] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.462572] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.788358] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +5.780901] kauditd_printk_skb: 36 callbacks suppressed
[Sep15 06:33] hrtimer: interrupt took 17557708 ns
==> etcd [f3f6b32525e6] <==
{"level":"info","ts":"2024-09-15T06:30:40.850517Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-09-15T06:30:40.850538Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-09-15T06:30:41.682041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-15T06:30:41.682266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-15T06:30:41.682387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-15T06:30:41.682493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-15T06:30:41.682579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-15T06:30:41.682688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-15T06:30:41.682779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-15T06:30:41.686129Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-15T06:30:41.690285Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-837740 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-15T06:30:41.691718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-15T06:30:41.692197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-15T06:30:41.692544Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-15T06:30:41.692643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-15T06:30:41.693528Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-15T06:30:41.693777Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-15T06:30:41.695531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-15T06:30:41.695584Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-15T06:30:41.703567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-15T06:30:41.703622Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-15T06:30:41.718640Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-15T06:40:42.356221Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1838}
{"level":"info","ts":"2024-09-15T06:40:42.404320Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1838,"took":"46.880111ms","hash":3365489726,"current-db-size-bytes":8634368,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4833280,"current-db-size-in-use":"4.8 MB"}
{"level":"info","ts":"2024-09-15T06:40:42.404379Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3365489726,"revision":1838,"compact-revision":-1}
==> gcp-auth [5d3d92bbe7e7] <==
2024/09/15 06:33:38 GCP Auth Webhook started!
2024/09/15 06:33:55 Ready to marshal response ...
2024/09/15 06:33:55 Ready to write response ...
2024/09/15 06:33:56 Ready to marshal response ...
2024/09/15 06:33:56 Ready to write response ...
2024/09/15 06:34:19 Ready to marshal response ...
2024/09/15 06:34:19 Ready to write response ...
2024/09/15 06:34:19 Ready to marshal response ...
2024/09/15 06:34:19 Ready to write response ...
2024/09/15 06:34:20 Ready to marshal response ...
2024/09/15 06:34:20 Ready to write response ...
2024/09/15 06:42:31 Ready to marshal response ...
2024/09/15 06:42:31 Ready to write response ...
2024/09/15 06:42:34 Ready to marshal response ...
2024/09/15 06:42:34 Ready to write response ...
2024/09/15 06:43:00 Ready to marshal response ...
2024/09/15 06:43:00 Ready to write response ...
2024/09/15 06:43:33 Ready to marshal response ...
2024/09/15 06:43:33 Ready to write response ...
==> kernel <==
06:43:37 up 26 min, 0 users, load average: 1.97, 0.92, 0.67
Linux addons-837740 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kube-apiserver [484ea520c5e9] <==
W0915 06:34:11.074448 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0915 06:34:11.360354 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0915 06:34:11.426508 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0915 06:34:11.593677 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0915 06:34:11.785503 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0915 06:34:11.836913 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0915 06:34:12.238450 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0915 06:42:39.591684 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0915 06:43:16.110927 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0915 06:43:16.110983 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0915 06:43:16.152000 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0915 06:43:16.152055 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0915 06:43:16.156280 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0915 06:43:16.156397 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0915 06:43:16.190665 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0915 06:43:16.190808 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0915 06:43:16.317804 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0915 06:43:16.318280 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0915 06:43:17.157374 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0915 06:43:17.318895 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0915 06:43:17.333488 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0915 06:43:27.693696 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0915 06:43:28.748518 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0915 06:43:33.290453 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0915 06:43:33.605359 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.175.91"}
==> kube-controller-manager [3696b00b2455] <==
I0915 06:43:22.062016 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="21.981µs"
I0915 06:43:22.829069 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0915 06:43:22.829112 1 shared_informer.go:320] Caches are synced for resource quota
I0915 06:43:23.256597 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0915 06:43:23.256643 1 shared_informer.go:320] Caches are synced for garbage collector
W0915 06:43:26.465209 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:26.465259 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:27.051654 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:27.051696 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:27.232416 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:27.232461 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
E0915 06:43:28.750395 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:29.597549 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:29.597593 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:32.056704 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:32.056747 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:34.510668 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:34.510713 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:35.045827 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:35.045884 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0915 06:43:35.135593 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.108µs"
W0915 06:43:35.780147 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:35.780187 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0915 06:43:36.256410 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0915 06:43:36.256455 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [7e4e2e7c9f9d] <==
I0915 06:30:54.110522 1 server_linux.go:66] "Using iptables proxy"
I0915 06:30:54.228484 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0915 06:30:54.228552 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0915 06:30:54.271010 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0915 06:30:54.271085 1 server_linux.go:169] "Using iptables Proxier"
I0915 06:30:54.273174 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0915 06:30:54.273466 1 server.go:483] "Version info" version="v1.31.1"
I0915 06:30:54.273478 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0915 06:30:54.275150 1 config.go:199] "Starting service config controller"
I0915 06:30:54.275170 1 shared_informer.go:313] Waiting for caches to sync for service config
I0915 06:30:54.275194 1 config.go:105] "Starting endpoint slice config controller"
I0915 06:30:54.275198 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0915 06:30:54.275675 1 config.go:328] "Starting node config controller"
I0915 06:30:54.275682 1 shared_informer.go:313] Waiting for caches to sync for node config
I0915 06:30:54.375589 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0915 06:30:54.375639 1 shared_informer.go:320] Caches are synced for service config
I0915 06:30:54.376607 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [35aa9c1536cf] <==
W0915 06:30:45.488004 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 06:30:45.488146 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0915 06:30:45.488389 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0915 06:30:45.488476 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 06:30:45.488663 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0915 06:30:45.489141 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 06:30:45.489320 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0915 06:30:45.489239 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0915 06:30:45.489477 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0915 06:30:45.488517 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.316465 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0915 06:30:46.316512 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.382777 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 06:30:46.382823 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.432959 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 06:30:46.433020 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.477070 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 06:30:46.477123 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.573484 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0915 06:30:46.573702 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.595578 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 06:30:46.595873 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0915 06:30:46.799372 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0915 06:30:46.799629 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0915 06:30:48.745611 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 15 06:43:33 addons-837740 kubelet[2337]: I0915 06:43:33.542808 2337 memory_manager.go:354] "RemoveStaleState removing state" podUID="18829192-e1c9-489b-adf6-ecbd1ec662c8" containerName="gadget"
Sep 15 06:43:33 addons-837740 kubelet[2337]: I0915 06:43:33.654134 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/42398e49-2009-4739-b884-15187314ed39-gcp-creds\") pod \"nginx\" (UID: \"42398e49-2009-4739-b884-15187314ed39\") " pod="default/nginx"
Sep 15 06:43:33 addons-837740 kubelet[2337]: I0915 06:43:33.654182 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrlq\" (UniqueName: \"kubernetes.io/projected/42398e49-2009-4739-b884-15187314ed39-kube-api-access-prrlq\") pod \"nginx\" (UID: \"42398e49-2009-4739-b884-15187314ed39\") " pod="default/nginx"
Sep 15 06:43:33 addons-837740 kubelet[2337]: E0915 06:43:33.920255 2337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="893b7539-d0c9-4122-bcc7-7fcac741c78e"
Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.670769 2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqj9c\" (UniqueName: \"kubernetes.io/projected/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-kube-api-access-gqj9c\") pod \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\" (UID: \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\") "
Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.670828 2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-gcp-creds\") pod \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\" (UID: \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\") "
Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.670931 2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121" (UID: "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.672762 2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-kube-api-access-gqj9c" (OuterVolumeSpecName: "kube-api-access-gqj9c") pod "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121" (UID: "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121"). InnerVolumeSpecName "kube-api-access-gqj9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.772744 2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gqj9c\" (UniqueName: \"kubernetes.io/projected/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-kube-api-access-gqj9c\") on node \"addons-837740\" DevicePath \"\""
Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.772775 2337 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-gcp-creds\") on node \"addons-837740\" DevicePath \"\""
Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.884136 2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvgxx\" (UniqueName: \"kubernetes.io/projected/1a2130f7-6cbe-4a8b-bea3-e3e4436003d2-kube-api-access-wvgxx\") pod \"1a2130f7-6cbe-4a8b-bea3-e3e4436003d2\" (UID: \"1a2130f7-6cbe-4a8b-bea3-e3e4436003d2\") "
Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.890011 2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2130f7-6cbe-4a8b-bea3-e3e4436003d2-kube-api-access-wvgxx" (OuterVolumeSpecName: "kube-api-access-wvgxx") pod "1a2130f7-6cbe-4a8b-bea3-e3e4436003d2" (UID: "1a2130f7-6cbe-4a8b-bea3-e3e4436003d2"). InnerVolumeSpecName "kube-api-access-wvgxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.918341 2337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c2778a-a5ba-42cc-9f8d-38d41f1a3121" path="/var/lib/kubelet/pods/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121/volumes"
Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.984783 2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wvgxx\" (UniqueName: \"kubernetes.io/projected/1a2130f7-6cbe-4a8b-bea3-e3e4436003d2-kube-api-access-wvgxx\") on node \"addons-837740\" DevicePath \"\""
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.086022 2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdl67\" (UniqueName: \"kubernetes.io/projected/53474271-c9f2-4050-bf68-df5e1935aa85-kube-api-access-kdl67\") pod \"53474271-c9f2-4050-bf68-df5e1935aa85\" (UID: \"53474271-c9f2-4050-bf68-df5e1935aa85\") "
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.088791 2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53474271-c9f2-4050-bf68-df5e1935aa85-kube-api-access-kdl67" (OuterVolumeSpecName: "kube-api-access-kdl67") pod "53474271-c9f2-4050-bf68-df5e1935aa85" (UID: "53474271-c9f2-4050-bf68-df5e1935aa85"). InnerVolumeSpecName "kube-api-access-kdl67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.186662 2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kdl67\" (UniqueName: \"kubernetes.io/projected/53474271-c9f2-4050-bf68-df5e1935aa85-kube-api-access-kdl67\") on node \"addons-837740\" DevicePath \"\""
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.269567 2337 scope.go:117] "RemoveContainer" containerID="3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.322097 2337 scope.go:117] "RemoveContainer" containerID="3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
Sep 15 06:43:36 addons-837740 kubelet[2337]: E0915 06:43:36.323290 2337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c" containerID="3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.323335 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"} err="failed to get container status \"3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.323362 2337 scope.go:117] "RemoveContainer" containerID="8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.346041 2337 scope.go:117] "RemoveContainer" containerID="8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
Sep 15 06:43:36 addons-837740 kubelet[2337]: E0915 06:43:36.347177 2337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78" containerID="8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.347224 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"} err="failed to get container status \"8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
==> storage-provisioner [a2553db37f09] <==
I0915 06:31:00.001266 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 06:31:00.020062 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 06:31:00.020131 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 06:31:00.031775 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 06:31:00.032198 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab47c2fe-346c-4436-a442-df209c167d0c", APIVersion:"v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-837740_6e965af9-fe12-4e11-afb0-95e4c4520e62 became leader
I0915 06:31:00.032234 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-837740_6e965af9-fe12-4e11-afb0-95e4c4520e62!
I0915 06:31:00.133248 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-837740_6e965af9-fe12-4e11-afb0-95e4c4520e62!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-837740 -n addons-837740
helpers_test.go:261: (dbg) Run: kubectl --context addons-837740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-837740 describe pod busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-837740 describe pod busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd: exit status 1 (104.928313ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-837740/192.168.49.2
Start Time:       Sun, 15 Sep 2024 06:34:20 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6wcd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-l6wcd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-837740
  Normal   Pulling    7m54s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m53s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m53s (x4 over 9m18s)   kubelet            Error: ErrImagePull
  Warning  Failed     7m42s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m14s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-7wph7" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-tm9xd" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-837740 describe pod busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.99s)