=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.789209ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2sstq" [67cce838-d446-44f8-90cb-4b7c286fcfcb] Running
I0920 16:56:26.914684 15398 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:56:26.914709 15398 kapi.go:107] duration metric: took 4.06511ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003069847s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r58ln" [243fbbcd-f60b-492a-ab03-a7425f4bce3b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002754231s
addons_test.go:338: (dbg) Run: kubectl --context addons-205029 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-205029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-205029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07354132s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-205029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
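The failing check above boils down to: launch a throwaway busybox pod, run wget --spider -S against the registry Service's cluster DNS name, and look for "HTTP/1.1 200" in the captured output; here the pod never produced that response and kubectl gave up after roughly a minute ("timed out waiting for the condition"). A minimal standalone Go sketch of the same probe (not the test's actual helper; the context name, flags, and expected string are taken from the log above, everything else is an assumption) would be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirror the logged command: a one-off busybox pod that probes the in-cluster
	// registry Service. The "-t" half of the logged "-it" is dropped since there
	// is no TTY when shelling out from Go.
	cmd := exec.Command("kubectl", "--context", "addons-205029",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// This is the branch the test hit: kubectl exited non-zero after ~1m.
		fmt.Printf("registry probe failed: %v\n%s\n", err, out)
		return
	}
	if !strings.Contains(string(out), "HTTP/1.1 200") {
		fmt.Printf("expected \"HTTP/1.1 200\" in the response headers, got:\n%s\n", out)
		return
	}
	fmt.Println("registry is reachable at registry.kube-system.svc.cluster.local")
}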
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p addons-205029 ip
2024/09/20 16:57:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p addons-205029 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-205029
helpers_test.go:235: (dbg) docker inspect addons-205029:
-- stdout --
[
{
"Id": "6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca",
"Created": "2024-09-20T16:44:35.267135562Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 17526,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-20T16:44:35.406417505Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
"ResolvConfPath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/hostname",
"HostsPath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/hosts",
"LogPath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca-json.log",
"Name": "/addons-205029",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"addons-205029:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "addons-205029",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4-init/diff:/var/lib/docker/overlay2/04d8ee2bca91b716c0fbed8d5cf8682c2b84f5613656c8faad7ce3474f9e857f/diff",
"MergedDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4/merged",
"UpperDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4/diff",
"WorkDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "addons-205029",
"Source": "/var/lib/docker/volumes/addons-205029/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "addons-205029",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-205029",
"name.minikube.sigs.k8s.io": "addons-205029",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "baa298c4d59335be6917fea60d58f068d7ff318b3df17c4ffd8dbc5b5bfcf92e",
"SandboxKey": "/var/run/docker/netns/baa298c4d593",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-205029": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "9bda61730e6b3c6514aae8f9b88bc36015ae46024cb4ddff1d942a33513e91cf",
"EndpointID": "9fde9921025335b37e01768dd34b10b097dbc89411267e8b19d37f84bd600ccb",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-205029",
"6ba2b186673d"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-205029 -n addons-205029
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-205029 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | --download-only -p | download-docker-226389 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | download-docker-226389 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-226389 | download-docker-226389 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| start | --download-only -p | binary-mirror-950195 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | binary-mirror-950195 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:35633 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-950195 | binary-mirror-950195 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| addons | disable dashboard -p | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | addons-205029 | | | | | |
| addons | enable dashboard -p | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | addons-205029 | | | | | |
| start | -p addons-205029 --wait=true | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:47 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:48 UTC | 20 Sep 24 16:48 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-205029 addons | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | -p addons-205029 | | | | | |
| addons | disable cloud-spanner -p | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | addons-205029 | | | | | |
| addons | enable headlamp | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | -p addons-205029 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-205029 ssh cat | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | /opt/local-path-provisioner/pvc-d6bd4afe-8bba-4f86-86d7-a230517a8194_default_test-pvc/file1 | | | | | |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:57 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:57 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | addons-205029 | | | | | |
| addons | addons-205029 addons | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-205029 addons | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-205029 ssh curl -s | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-205029 ip | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | ingress --alsologtostderr -v=1 | | | | | |
| ip | addons-205029 ip | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| addons | addons-205029 addons disable | addons-205029 | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/20 16:44:13
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 16:44:13.479072 16774 out.go:345] Setting OutFile to fd 1 ...
I0920 16:44:13.479186 16774 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:44:13.479194 16774 out.go:358] Setting ErrFile to fd 2...
I0920 16:44:13.479199 16774 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:44:13.479394 16774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
I0920 16:44:13.480001 16774 out.go:352] Setting JSON to false
I0920 16:44:13.480865 16774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1597,"bootTime":1726849056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0920 16:44:13.480970 16774 start.go:139] virtualization: kvm guest
I0920 16:44:13.483255 16774 out.go:177] * [addons-205029] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I0920 16:44:13.484878 16774 out.go:177] - MINIKUBE_LOCATION=19672
I0920 16:44:13.484899 16774 notify.go:220] Checking for updates...
I0920 16:44:13.487980 16774 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 16:44:13.489505 16774 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
I0920 16:44:13.490982 16774 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
I0920 16:44:13.492311 16774 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0920 16:44:13.493655 16774 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0920 16:44:13.495342 16774 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 16:44:13.519824 16774 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
I0920 16:44:13.519933 16774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0920 16:44:13.565776 16774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 16:44:13.55641081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0920 16:44:13.565884 16774 docker.go:318] overlay module found
I0920 16:44:13.567781 16774 out.go:177] * Using the docker driver based on user configuration
I0920 16:44:13.569278 16774 start.go:297] selected driver: docker
I0920 16:44:13.569297 16774 start.go:901] validating driver "docker" against <nil>
I0920 16:44:13.569312 16774 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 16:44:13.570093 16774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0920 16:44:13.616950 16774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 16:44:13.608060045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0920 16:44:13.617152 16774 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0920 16:44:13.617418 16774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 16:44:13.619145 16774 out.go:177] * Using Docker driver with root privileges
I0920 16:44:13.620576 16774 cni.go:84] Creating CNI manager for ""
I0920 16:44:13.620667 16774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 16:44:13.620683 16774 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0920 16:44:13.620762 16774 start.go:340] cluster config:
{Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 16:44:13.622133 16774 out.go:177] * Starting "addons-205029" primary control-plane node in "addons-205029" cluster
I0920 16:44:13.623665 16774 cache.go:121] Beginning downloading kic base image for docker with docker
I0920 16:44:13.625122 16774 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
I0920 16:44:13.626588 16774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 16:44:13.626636 16774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I0920 16:44:13.626651 16774 cache.go:56] Caching tarball of preloaded images
I0920 16:44:13.626702 16774 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
I0920 16:44:13.626729 16774 preload.go:172] Found /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0920 16:44:13.626737 16774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0920 16:44:13.627073 16774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/config.json ...
I0920 16:44:13.627099 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/config.json: {Name:mk3df41d227938ff6bc2c2917ae2860a5ae8fb8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:13.642943 16774 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
I0920 16:44:13.643086 16774 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
I0920 16:44:13.643108 16774 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
I0920 16:44:13.643112 16774 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
I0920 16:44:13.643120 16774 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
I0920 16:44:13.643125 16774 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
I0920 16:44:25.835992 16774 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
I0920 16:44:25.836030 16774 cache.go:194] Successfully downloaded all kic artifacts
I0920 16:44:25.836077 16774 start.go:360] acquireMachinesLock for addons-205029: {Name:mk9021422c05f4629eb9257457a8fcc06e3f877b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 16:44:25.836172 16774 start.go:364] duration metric: took 76.433µs to acquireMachinesLock for "addons-205029"
I0920 16:44:25.836194 16774 start.go:93] Provisioning new machine with config: &{Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 16:44:25.836266 16774 start.go:125] createHost starting for "" (driver="docker")
I0920 16:44:25.838901 16774 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0920 16:44:25.839156 16774 start.go:159] libmachine.API.Create for "addons-205029" (driver="docker")
I0920 16:44:25.839191 16774 client.go:168] LocalClient.Create starting
I0920 16:44:25.839303 16774 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem
I0920 16:44:26.077196 16774 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem
I0920 16:44:26.280201 16774 cli_runner.go:164] Run: docker network inspect addons-205029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 16:44:26.296211 16774 cli_runner.go:211] docker network inspect addons-205029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 16:44:26.296295 16774 network_create.go:284] running [docker network inspect addons-205029] to gather additional debugging logs...
I0920 16:44:26.296319 16774 cli_runner.go:164] Run: docker network inspect addons-205029
W0920 16:44:26.311340 16774 cli_runner.go:211] docker network inspect addons-205029 returned with exit code 1
I0920 16:44:26.311371 16774 network_create.go:287] error running [docker network inspect addons-205029]: docker network inspect addons-205029: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-205029 not found
I0920 16:44:26.311382 16774 network_create.go:289] output of [docker network inspect addons-205029]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-205029 not found
** /stderr **
I0920 16:44:26.311469 16774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 16:44:26.327244 16774 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b1a780}
I0920 16:44:26.327288 16774 network_create.go:124] attempt to create docker network addons-205029 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0920 16:44:26.327329 16774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-205029 addons-205029
I0920 16:44:26.388062 16774 network_create.go:108] docker network addons-205029 192.168.49.0/24 created
I0920 16:44:26.388087 16774 kic.go:121] calculated static IP "192.168.49.2" for the "addons-205029" container
I0920 16:44:26.388154 16774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0920 16:44:26.404467 16774 cli_runner.go:164] Run: docker volume create addons-205029 --label name.minikube.sigs.k8s.io=addons-205029 --label created_by.minikube.sigs.k8s.io=true
I0920 16:44:26.421456 16774 oci.go:103] Successfully created a docker volume addons-205029
I0920 16:44:26.421532 16774 cli_runner.go:164] Run: docker run --rm --name addons-205029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205029 --entrypoint /usr/bin/test -v addons-205029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
I0920 16:44:31.241695 16774 cli_runner.go:217] Completed: docker run --rm --name addons-205029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205029 --entrypoint /usr/bin/test -v addons-205029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (4.820124605s)
I0920 16:44:31.241722 16774 oci.go:107] Successfully prepared a docker volume addons-205029
I0920 16:44:31.241737 16774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 16:44:31.241757 16774 kic.go:194] Starting extracting preloaded images to volume ...
I0920 16:44:31.241819 16774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-205029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
I0920 16:44:35.206249 16774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-205029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.964388864s)
I0920 16:44:35.206282 16774 kic.go:203] duration metric: took 3.964520827s to extract preloaded images to volume ...
W0920 16:44:35.206418 16774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0920 16:44:35.206533 16774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0920 16:44:35.252143 16774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-205029 --name addons-205029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-205029 --network addons-205029 --ip 192.168.49.2 --volume addons-205029:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
I0920 16:44:35.582140 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Running}}
I0920 16:44:35.599543 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:35.617933 16774 cli_runner.go:164] Run: docker exec addons-205029 stat /var/lib/dpkg/alternatives/iptables
I0920 16:44:35.661912 16774 oci.go:144] the created container "addons-205029" has a running status.
I0920 16:44:35.661938 16774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa...
I0920 16:44:35.889519 16774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0920 16:44:35.913888 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:35.929899 16774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0920 16:44:35.929918 16774 kic_runner.go:114] Args: [docker exec --privileged addons-205029 chown docker:docker /home/docker/.ssh/authorized_keys]
I0920 16:44:35.978911 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:35.995802 16774 machine.go:93] provisionDockerMachine start ...
I0920 16:44:35.995886 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:36.013440 16774 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:36.013632 16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0920 16:44:36.013644 16774 main.go:141] libmachine: About to run SSH command:
hostname
I0920 16:44:36.206524 16774 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-205029
I0920 16:44:36.206551 16774 ubuntu.go:169] provisioning hostname "addons-205029"
I0920 16:44:36.206605 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:36.223600 16774 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:36.223787 16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0920 16:44:36.223809 16774 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-205029 && echo "addons-205029" | sudo tee /etc/hostname
I0920 16:44:36.369133 16774 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-205029
I0920 16:44:36.369207 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:36.385758 16774 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:36.385954 16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0920 16:44:36.385973 16774 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-205029' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-205029/g' /etc/hosts;
else
echo '127.0.1.1 addons-205029' | sudo tee -a /etc/hosts;
fi
fi
I0920 16:44:36.514906 16774 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0920 16:44:36.514931 16774 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8616/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8616/.minikube}
I0920 16:44:36.515001 16774 ubuntu.go:177] setting up certificates
I0920 16:44:36.515013 16774 provision.go:84] configureAuth start
I0920 16:44:36.515084 16774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205029
I0920 16:44:36.531544 16774 provision.go:143] copyHostCerts
I0920 16:44:36.531616 16774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem (1082 bytes)
I0920 16:44:36.531745 16774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem (1123 bytes)
I0920 16:44:36.531812 16774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem (1679 bytes)
I0920 16:44:36.531874 16774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem org=jenkins.addons-205029 san=[127.0.0.1 192.168.49.2 addons-205029 localhost minikube]
I0920 16:44:36.667019 16774 provision.go:177] copyRemoteCerts
I0920 16:44:36.667075 16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0920 16:44:36.667111 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:36.683532 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:36.778950 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0920 16:44:36.799356 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0920 16:44:36.819696 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0920 16:44:36.840152 16774 provision.go:87] duration metric: took 325.125435ms to configureAuth
I0920 16:44:36.840173 16774 ubuntu.go:193] setting minikube options for container-runtime
I0920 16:44:36.840311 16774 config.go:182] Loaded profile config "addons-205029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:44:36.840350 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:36.857247 16774 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:36.857441 16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0920 16:44:36.857456 16774 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0920 16:44:36.983454 16774 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0920 16:44:36.983474 16774 ubuntu.go:71] root file system type: overlay
I0920 16:44:36.983595 16774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0920 16:44:36.983650 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:37.000023 16774 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:37.000216 16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0920 16:44:37.000304 16774 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0920 16:44:37.137225 16774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0920 16:44:37.137307 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:37.153537 16774 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:37.153718 16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0920 16:44:37.153735 16774 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0920 16:44:37.856452 16774 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-19 14:24:32.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-20 16:44:37.134445546 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0920 16:44:37.856489 16774 machine.go:96] duration metric: took 1.860663563s to provisionDockerMachine
I0920 16:44:37.856501 16774 client.go:171] duration metric: took 12.017302418s to LocalClient.Create
I0920 16:44:37.856521 16774 start.go:167] duration metric: took 12.01736583s to libmachine.API.Create "addons-205029"
I0920 16:44:37.856531 16774 start.go:293] postStartSetup for "addons-205029" (driver="docker")
I0920 16:44:37.856546 16774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0920 16:44:37.856612 16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0920 16:44:37.856657 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:37.872895 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:37.963792 16774 ssh_runner.go:195] Run: cat /etc/os-release
I0920 16:44:37.966776 16774 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0920 16:44:37.966802 16774 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0920 16:44:37.966811 16774 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0920 16:44:37.966821 16774 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0920 16:44:37.966833 16774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/addons for local assets ...
I0920 16:44:37.966893 16774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/files for local assets ...
I0920 16:44:37.966917 16774 start.go:296] duration metric: took 110.378581ms for postStartSetup
I0920 16:44:37.967194 16774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205029
I0920 16:44:37.983269 16774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/config.json ...
I0920 16:44:37.983512 16774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0920 16:44:37.983548 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:38.000043 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:38.087578 16774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0920 16:44:38.091688 16774 start.go:128] duration metric: took 12.255409328s to createHost
I0920 16:44:38.091708 16774 start.go:83] releasing machines lock for "addons-205029", held for 12.255526508s
I0920 16:44:38.091773 16774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205029
I0920 16:44:38.107666 16774 ssh_runner.go:195] Run: cat /version.json
I0920 16:44:38.107722 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:38.107737 16774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0920 16:44:38.107810 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:38.125566 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:38.126841 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:38.285838 16774 ssh_runner.go:195] Run: systemctl --version
I0920 16:44:38.289936 16774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0920 16:44:38.293954 16774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0920 16:44:38.316287 16774 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0920 16:44:38.316343 16774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0920 16:44:38.341889 16774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
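Editor's note: the two find/sed invocations above add a "name" field to the loopback CNI config, pin its cniVersion to 1.0.0, and park the bridge/podman configs out of the way. The patched file itself is not printed in this log; a sketch of what it would roughly contain afterwards (file name and contents are assumptions based on the sed expressions):

# Sketch (assumed): a loopback CNI config after the patch above.
sudo tee /etc/cni/net.d/200-loopback.conf >/dev/null <<'EOF'
{
    "cniVersion": "1.0.0",
    "name": "loopback",
    "type": "loopback"
}
EOF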
I0920 16:44:38.341916 16774 start.go:495] detecting cgroup driver to use...
I0920 16:44:38.341946 16774 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0920 16:44:38.342058 16774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 16:44:38.356646 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0920 16:44:38.365911 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0920 16:44:38.375287 16774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0920 16:44:38.375345 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0920 16:44:38.384926 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 16:44:38.394150 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0920 16:44:38.403040 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 16:44:38.412257 16774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0920 16:44:38.421005 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0920 16:44:38.430219 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0920 16:44:38.439566 16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
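Editor's note: the sed pipeline above rewrites /etc/containerd/config.toml for the detected "cgroupfs" driver: pause image pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj disabled, SystemdCgroup forced to false, the runc v2 runtime selected, conf_dir set, and unprivileged ports enabled under the CRI plugin. A rough sketch of the resulting fragment (assumed; the full file is never printed in this log):

# /etc/containerd/config.toml (excerpt, assumed) after the sed edits above
[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  restrict_oom_score_adj = false
  sandbox_image = "registry.k8s.io/pause:3.10"
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false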
I0920 16:44:38.448832 16774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0920 16:44:38.456655 16774 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0920 16:44:38.456712 16774 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0920 16:44:38.469843 16774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0920 16:44:38.478161 16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:38.553464 16774 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0920 16:44:38.641249 16774 start.go:495] detecting cgroup driver to use...
I0920 16:44:38.641297 16774 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0920 16:44:38.641336 16774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0920 16:44:38.652129 16774 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0920 16:44:38.652189 16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0920 16:44:38.663581 16774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 16:44:38.679578 16774 ssh_runner.go:195] Run: which cri-dockerd
I0920 16:44:38.682948 16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0920 16:44:38.692617 16774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0920 16:44:38.711087 16774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0920 16:44:38.790494 16774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0920 16:44:38.885768 16774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0920 16:44:38.885894 16774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0920 16:44:38.902460 16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:38.982503 16774 ssh_runner.go:195] Run: sudo systemctl restart docker
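Editor's note: docker.go:574 above reports configuring Docker for the "cgroupfs" cgroup driver and copies a 130-byte /etc/docker/daemon.json into place before the daemon-reload and restart. The file body is not shown; a minimal sketch of what such a daemon.json typically contains for this purpose (contents are an assumption):

# Sketch (assumed contents; the log only reports the file size).
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker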
I0920 16:44:39.237549 16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0920 16:44:39.248309 16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 16:44:39.259122 16774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0920 16:44:39.336409 16774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0920 16:44:39.411133 16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:39.487678 16774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0920 16:44:39.499466 16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 16:44:39.508899 16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:39.584378 16774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0920 16:44:39.644513 16774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0920 16:44:39.644596 16774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0920 16:44:39.648006 16774 start.go:563] Will wait 60s for crictl version
I0920 16:44:39.648048 16774 ssh_runner.go:195] Run: which crictl
I0920 16:44:39.651108 16774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0920 16:44:39.681795 16774 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.0
RuntimeApiVersion: v1
I0920 16:44:39.681855 16774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0920 16:44:39.704047 16774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0920 16:44:39.730171 16774 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
I0920 16:44:39.730250 16774 cli_runner.go:164] Run: docker network inspect addons-205029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 16:44:39.747110 16774 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0920 16:44:39.750457 16774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
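Editor's note: the grep/echo/cp one-liner above is the idempotent pattern used for host entries: drop any existing line for the name, append a fresh mapping, and copy the temp file back over /etc/hosts. The same shape reappears further down for control-plane.minikube.internal. A generic sketch of it (the helper name is made up for illustration):

# Sketch: pin a hostname to an IP in /etc/hosts the way the one-liner above does.
set_host_entry() {                       # helper name is illustrative only
  local ip="$1" name="$2"
  { grep -v "[[:space:]]${name}\$" /etc/hosts; echo "${ip} ${name}"; } > "/tmp/h.$$"
  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
}
set_host_entry 192.168.49.1 host.minikube.internal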
I0920 16:44:39.760226 16774 kubeadm.go:883] updating cluster {Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0920 16:44:39.760331 16774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 16:44:39.760376 16774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0920 16:44:39.778278 16774 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0920 16:44:39.778298 16774 docker.go:615] Images already preloaded, skipping extraction
I0920 16:44:39.778356 16774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0920 16:44:39.796624 16774 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0920 16:44:39.796666 16774 cache_images.go:84] Images are preloaded, skipping loading
I0920 16:44:39.796676 16774 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0920 16:44:39.796772 16774 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-205029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
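Editor's note: the [Unit]/[Service] fragment printed by kubeadm.go:946 above is what later lands in the 312-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp'd a little further down. A sketch of installing it by hand with exactly the flags shown:

# Sketch: write the kubelet drop-in shown above (path taken from the later scp line).
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-205029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet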
I0920 16:44:39.796836 16774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0920 16:44:39.839054 16774 cni.go:84] Creating CNI manager for ""
I0920 16:44:39.839088 16774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 16:44:39.839098 16774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0920 16:44:39.839117 16774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-205029 NodeName:addons-205029 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0920 16:44:39.839235 16774 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "addons-205029"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
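Editor's note: kubeadm init further down warns that this file uses the deprecated kubeadm.k8s.io/v1beta3 API. A sketch of how the same file could be checked and migrated ahead of time with the kubeadm binary staged under /var/lib/minikube/binaries (standard kubeadm subcommands; this run did not perform them):

# Sketch: validate and migrate the generated kubeadm config (not done by this run).
KUBEADM=/var/lib/minikube/binaries/v1.31.1/kubeadm
sudo $KUBEADM config validate --config /var/tmp/minikube/kubeadm.yaml
sudo $KUBEADM config migrate \
  --old-config /var/tmp/minikube/kubeadm.yaml \
  --new-config /var/tmp/minikube/kubeadm.migrated.yaml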
I0920 16:44:39.839287 16774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0920 16:44:39.847381 16774 binaries.go:44] Found k8s binaries, skipping transfer
I0920 16:44:39.847443 16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0920 16:44:39.855519 16774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0920 16:44:39.870882 16774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0920 16:44:39.886343 16774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0920 16:44:39.902318 16774 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0920 16:44:39.905578 16774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0920 16:44:39.915155 16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:39.989270 16774 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 16:44:40.001702 16774 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029 for IP: 192.168.49.2
I0920 16:44:40.001723 16774 certs.go:194] generating shared ca certs ...
I0920 16:44:40.001745 16774 certs.go:226] acquiring lock for ca certs: {Name:mk7859bcc6bcc87de2e2da04bdba4ac21b3ab143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.001867 16774 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key
I0920 16:44:40.249259 16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt ...
I0920 16:44:40.249287 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt: {Name:mk44a784a15cda94cf26c63cfd7e14aa1f1132b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.249459 16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key ...
I0920 16:44:40.249471 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key: {Name:mkfca71425b22ed5e73544af15493c3cf339d073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.249541 16774 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key
I0920 16:44:40.404491 16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt ...
I0920 16:44:40.404519 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt: {Name:mk78c3531f6cec4a6da2c3ff045ac0c1be8662b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.404677 16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key ...
I0920 16:44:40.404688 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key: {Name:mk47981bbe3a26551f13bf7ccae25f4674a14e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.404765 16774 certs.go:256] generating profile certs ...
I0920 16:44:40.404815 16774 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.key
I0920 16:44:40.404826 16774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt with IP's: []
I0920 16:44:40.489727 16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt ...
I0920 16:44:40.489760 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: {Name:mk1cb9d534fa0209713ec74aa58d9a7a8da5c7e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.489932 16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.key ...
I0920 16:44:40.489942 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.key: {Name:mkc57a397f86b96efb60565f7dfd38ac2ddd4de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.490015 16774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e
I0920 16:44:40.490033 16774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0920 16:44:40.666783 16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e ...
I0920 16:44:40.666814 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e: {Name:mkc84256309f9bc8986ecaf3e3ff5e2e1ceb68a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.666989 16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e ...
I0920 16:44:40.667002 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e: {Name:mkca7096c26a1de58e29d211308975f671f2b850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.667074 16774 certs.go:381] copying /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e -> /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt
I0920 16:44:40.667144 16774 certs.go:385] copying /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e -> /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key
I0920 16:44:40.667196 16774 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key
I0920 16:44:40.667214 16774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt with IP's: []
I0920 16:44:40.752763 16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt ...
I0920 16:44:40.752794 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt: {Name:mk050a31d02d8979f4fe0e44c7f315005f69edf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.752957 16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key ...
I0920 16:44:40.752969 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key: {Name:mkc9a5c83731e76d42012d2048235cd283ee8d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:40.753118 16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem (1679 bytes)
I0920 16:44:40.753151 16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem (1082 bytes)
I0920 16:44:40.753174 16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem (1123 bytes)
I0920 16:44:40.753195 16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem (1679 bytes)
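Editor's note: certs.go above generates the shared CAs and the profile certificates in-process (crypto.go); the apiserver cert is signed for the service IP, localhost, an internal IP, and the node IP listed at crypto.go:68. For orientation only, an openssl equivalent of that apiserver certificate would look roughly like this (subject and file names are illustrative; minikube does not shell out to openssl for this):

# Sketch: an openssl equivalent of the "minikube" apiserver cert generated above,
# signed by the minikubeCA key and carrying the same IP SANs (illustrative only).
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
openssl x509 -req -in apiserver.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out apiserver.crt \
  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2")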
I0920 16:44:40.753729 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0920 16:44:40.775680 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0920 16:44:40.797382 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0920 16:44:40.819954 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0920 16:44:40.841745 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0920 16:44:40.863159 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0920 16:44:40.884669 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0920 16:44:40.905865 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0920 16:44:40.928562 16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0920 16:44:40.950437 16774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0920 16:44:40.966146 16774 ssh_runner.go:195] Run: openssl version
I0920 16:44:40.971368 16774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0920 16:44:40.980403 16774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:40.983824 16774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:40.983880 16774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:40.990237 16774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
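Editor's note: the openssl/ln sequence above is a hand-rolled c_rehash: the CA is linked into /etc/ssl/certs both by name and by its subject hash (b5213941.0 here) so that OpenSSL-based clients on the node trust minikubeCA. The same steps gathered into one place (the helper name is made up for illustration):

# Sketch: trust a CA the way the commands above do, via a hash-named symlink.
install_ca() {                                   # helper name is illustrative only
  local ca="$1"                                  # e.g. /usr/share/ca-certificates/minikubeCA.pem
  local hash
  hash=$(openssl x509 -hash -noout -in "$ca")    # e.g. b5213941
  sudo ln -fs "$ca" "/etc/ssl/certs/$(basename "$ca")"
  sudo ln -fs "/etc/ssl/certs/$(basename "$ca")" "/etc/ssl/certs/${hash}.0"
}
install_ca /usr/share/ca-certificates/minikubeCA.pem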
I0920 16:44:40.999014 16774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0920 16:44:41.002181 16774 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0920 16:44:41.002225 16774 kubeadm.go:392] StartCluster: {Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 16:44:41.002316 16774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0920 16:44:41.019562 16774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0920 16:44:41.028127 16774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0920 16:44:41.036685 16774 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0920 16:44:41.036754 16774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0920 16:44:41.045196 16774 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0920 16:44:41.045220 16774 kubeadm.go:157] found existing configuration files:
I0920 16:44:41.045270 16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0920 16:44:41.053705 16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0920 16:44:41.053760 16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0920 16:44:41.062016 16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0920 16:44:41.070473 16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0920 16:44:41.070535 16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0920 16:44:41.078479 16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0920 16:44:41.086908 16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0920 16:44:41.086995 16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0920 16:44:41.095025 16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0920 16:44:41.103447 16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0920 16:44:41.103518 16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0920 16:44:41.111319 16774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0920 16:44:41.147076 16774 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0920 16:44:41.147152 16774 kubeadm.go:310] [preflight] Running pre-flight checks
I0920 16:44:41.166636 16774 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0920 16:44:41.166726 16774 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
I0920 16:44:41.166759 16774 kubeadm.go:310] OS: Linux
I0920 16:44:41.166800 16774 kubeadm.go:310] CGROUPS_CPU: enabled
I0920 16:44:41.166842 16774 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0920 16:44:41.166886 16774 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0920 16:44:41.166928 16774 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0920 16:44:41.166989 16774 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0920 16:44:41.167083 16774 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0920 16:44:41.167176 16774 kubeadm.go:310] CGROUPS_PIDS: enabled
I0920 16:44:41.167248 16774 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0920 16:44:41.167313 16774 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0920 16:44:41.214923 16774 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0920 16:44:41.215063 16774 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0920 16:44:41.215226 16774 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0920 16:44:41.224975 16774 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0920 16:44:41.228035 16774 out.go:235] - Generating certificates and keys ...
I0920 16:44:41.228137 16774 kubeadm.go:310] [certs] Using existing ca certificate authority
I0920 16:44:41.228198 16774 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0920 16:44:41.352731 16774 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0920 16:44:41.559862 16774 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0920 16:44:41.760049 16774 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0920 16:44:41.947017 16774 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0920 16:44:42.023472 16774 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0920 16:44:42.023634 16774 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-205029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0920 16:44:42.210939 16774 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0920 16:44:42.211100 16774 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-205029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0920 16:44:42.399366 16774 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0920 16:44:42.617900 16774 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0920 16:44:42.701698 16774 kubeadm.go:310] [certs] Generating "sa" key and public key
I0920 16:44:42.701792 16774 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0920 16:44:42.814142 16774 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0920 16:44:42.955822 16774 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0920 16:44:43.055761 16774 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0920 16:44:43.154415 16774 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0920 16:44:43.366002 16774 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0920 16:44:43.366399 16774 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0920 16:44:43.368826 16774 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0920 16:44:43.371120 16774 out.go:235] - Booting up control plane ...
I0920 16:44:43.371226 16774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0920 16:44:43.371305 16774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0920 16:44:43.371390 16774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0920 16:44:43.380738 16774 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0920 16:44:43.386150 16774 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0920 16:44:43.386221 16774 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0920 16:44:43.467512 16774 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0920 16:44:43.467659 16774 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0920 16:44:43.968987 16774 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.598165ms
I0920 16:44:43.969098 16774 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0920 16:44:48.471004 16774 kubeadm.go:310] [api-check] The API server is healthy after 4.501946786s
I0920 16:44:48.482108 16774 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0920 16:44:48.492370 16774 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0920 16:44:48.509047 16774 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0920 16:44:48.509312 16774 kubeadm.go:310] [mark-control-plane] Marking the node addons-205029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0920 16:44:48.516108 16774 kubeadm.go:310] [bootstrap-token] Using token: ss9buj.0c6u12p1td4a48ak
I0920 16:44:48.517562 16774 out.go:235] - Configuring RBAC rules ...
I0920 16:44:48.517706 16774 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0920 16:44:48.520397 16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0920 16:44:48.526073 16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0920 16:44:48.528367 16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0920 16:44:48.530575 16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0920 16:44:48.533852 16774 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0920 16:44:48.876548 16774 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0920 16:44:49.299932 16774 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0920 16:44:49.877603 16774 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0920 16:44:49.878389 16774 kubeadm.go:310]
I0920 16:44:49.878480 16774 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0920 16:44:49.878492 16774 kubeadm.go:310]
I0920 16:44:49.878586 16774 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0920 16:44:49.878595 16774 kubeadm.go:310]
I0920 16:44:49.878623 16774 kubeadm.go:310] mkdir -p $HOME/.kube
I0920 16:44:49.878726 16774 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0920 16:44:49.878815 16774 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0920 16:44:49.878825 16774 kubeadm.go:310]
I0920 16:44:49.878961 16774 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0920 16:44:49.879013 16774 kubeadm.go:310]
I0920 16:44:49.879087 16774 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0920 16:44:49.879097 16774 kubeadm.go:310]
I0920 16:44:49.879178 16774 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0920 16:44:49.879289 16774 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0920 16:44:49.879380 16774 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0920 16:44:49.879392 16774 kubeadm.go:310]
I0920 16:44:49.879514 16774 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0920 16:44:49.879621 16774 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0920 16:44:49.879641 16774 kubeadm.go:310]
I0920 16:44:49.879756 16774 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ss9buj.0c6u12p1td4a48ak \
I0920 16:44:49.879883 16774 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:240c065d4f95c9bb5d28e0d1bbd6719e72d2976d0c827c563409b1a9ab5915cb \
I0920 16:44:49.879928 16774 kubeadm.go:310] --control-plane
I0920 16:44:49.879941 16774 kubeadm.go:310]
I0920 16:44:49.880015 16774 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0920 16:44:49.880021 16774 kubeadm.go:310]
I0920 16:44:49.880092 16774 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ss9buj.0c6u12p1td4a48ak \
I0920 16:44:49.880187 16774 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:240c065d4f95c9bb5d28e0d1bbd6719e72d2976d0c827c563409b1a9ab5915cb
I0920 16:44:49.881503 16774 kubeadm.go:310] W0920 16:44:41.144356 1919 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 16:44:49.881808 16774 kubeadm.go:310] W0920 16:44:41.144993 1919 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 16:44:49.882016 16774 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
I0920 16:44:49.882108 16774 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0920 16:44:49.882129 16774 cni.go:84] Creating CNI manager for ""
I0920 16:44:49.882144 16774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 16:44:49.884169 16774 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0920 16:44:49.885691 16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0920 16:44:49.893901 16774 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
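Editor's note: cni.go above copies a 496-byte /etc/cni/net.d/1-k8s.conflist for the bridge CNI it selected. The file body is not printed in this log; a sketch of a bridge conflist of that general shape (contents are an assumption):

# Sketch (assumed): a bridge CNI conflist of the kind copied to /etc/cni/net.d/1-k8s.conflist.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF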
I0920 16:44:49.909805 16774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0920 16:44:49.909868 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:49.909882 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-205029 minikube.k8s.io/updated_at=2024_09_20T16_44_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-205029 minikube.k8s.io/primary=true
I0920 16:44:49.916704 16774 ops.go:34] apiserver oom_adj: -16
I0920 16:44:49.990468 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:50.491296 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:50.990755 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:51.491007 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:51.990868 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:52.490655 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:52.991377 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:53.491475 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:53.991454 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:54.491580 16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:54.552978 16774 kubeadm.go:1113] duration metric: took 4.643164069s to wait for elevateKubeSystemPrivileges
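Editor's note: the repeated "kubectl get sa default" invocations above are a poll. The clusterrolebinding and node labels are applied once, then the default service account is re-checked roughly every 500ms until it exists, which took about 4.6s in this run. An equivalent wait loop:

# Sketch: poll until the default service account exists (what the repeated calls above do).
KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
until sudo $KUBECTL get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done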
I0920 16:44:54.553011 16774 kubeadm.go:394] duration metric: took 13.550789888s to StartCluster
I0920 16:44:54.553028 16774 settings.go:142] acquiring lock: {Name:mk0bd30b070fa56866482d504f296479e9d1b0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:54.553128 16774 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19672-8616/kubeconfig
I0920 16:44:54.553544 16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/kubeconfig: {Name:mk17e3b05f62f29ee13b5427250b308800e65dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:54.553751 16774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0920 16:44:54.553747 16774 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 16:44:54.553771 16774 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0920 16:44:54.553866 16774 addons.go:69] Setting volumesnapshots=true in profile "addons-205029"
I0920 16:44:54.553871 16774 addons.go:69] Setting gcp-auth=true in profile "addons-205029"
I0920 16:44:54.553873 16774 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-205029"
I0920 16:44:54.553885 16774 addons.go:234] Setting addon volumesnapshots=true in "addons-205029"
I0920 16:44:54.553884 16774 addons.go:69] Setting default-storageclass=true in profile "addons-205029"
I0920 16:44:54.553892 16774 mustload.go:65] Loading cluster: addons-205029
I0920 16:44:54.553888 16774 addons.go:69] Setting metrics-server=true in profile "addons-205029"
I0920 16:44:54.553900 16774 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-205029"
I0920 16:44:54.553903 16774 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-205029"
I0920 16:44:54.553914 16774 addons.go:234] Setting addon metrics-server=true in "addons-205029"
I0920 16:44:54.553914 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.553944 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.554090 16774 config.go:182] Loaded profile config "addons-205029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:44:54.553839 16774 addons.go:69] Setting cloud-spanner=true in profile "addons-205029"
I0920 16:44:54.554174 16774 addons.go:234] Setting addon cloud-spanner=true in "addons-205029"
I0920 16:44:54.554203 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.554254 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554269 16774 addons.go:69] Setting storage-provisioner=true in profile "addons-205029"
I0920 16:44:54.554283 16774 addons.go:234] Setting addon storage-provisioner=true in "addons-205029"
I0920 16:44:54.554305 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.554326 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554411 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554464 16774 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-205029"
I0920 16:44:54.554485 16774 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-205029"
I0920 16:44:54.554506 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.554622 16774 config.go:182] Loaded profile config "addons-205029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:44:54.554660 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554254 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554744 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554445 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.554926 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.555097 16774 addons.go:69] Setting inspektor-gadget=true in profile "addons-205029"
I0920 16:44:54.555127 16774 addons.go:234] Setting addon inspektor-gadget=true in "addons-205029"
I0920 16:44:54.555166 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.553851 16774 addons.go:69] Setting ingress=true in profile "addons-205029"
I0920 16:44:54.555487 16774 addons.go:234] Setting addon ingress=true in "addons-205029"
I0920 16:44:54.555546 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.555644 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.553851 16774 addons.go:69] Setting registry=true in profile "addons-205029"
I0920 16:44:54.556041 16774 addons.go:234] Setting addon registry=true in "addons-205029"
I0920 16:44:54.556074 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.556084 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.553858 16774 addons.go:69] Setting volcano=true in profile "addons-205029"
I0920 16:44:54.556200 16774 addons.go:234] Setting addon volcano=true in "addons-205029"
I0920 16:44:54.556256 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.553862 16774 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-205029"
I0920 16:44:54.556415 16774 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-205029"
I0920 16:44:54.556466 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.553848 16774 addons.go:69] Setting yakd=true in profile "addons-205029"
I0920 16:44:54.556622 16774 addons.go:234] Setting addon yakd=true in "addons-205029"
I0920 16:44:54.556650 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.557393 16774 out.go:177] * Verifying Kubernetes components...
I0920 16:44:54.559186 16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:54.553861 16774 addons.go:69] Setting ingress-dns=true in profile "addons-205029"
I0920 16:44:54.559363 16774 addons.go:234] Setting addon ingress-dns=true in "addons-205029"
I0920 16:44:54.559402 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.559886 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.600682 16774 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-205029"
I0920 16:44:54.600730 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.601198 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.603185 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.609037 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.609536 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.609823 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.615769 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.629928 16774 addons.go:234] Setting addon default-storageclass=true in "addons-205029"
I0920 16:44:54.629970 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:44:54.630369 16774 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0920 16:44:54.630398 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:44:54.632905 16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 16:44:54.632935 16774 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0920 16:44:54.633003 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.630372 16774 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0920 16:44:54.636742 16774 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0920 16:44:54.630369 16774 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0920 16:44:54.639404 16774 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 16:44:54.639438 16774 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0920 16:44:54.639503 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.639822 16774 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0920 16:44:54.639839 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0920 16:44:54.639884 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.639822 16774 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:44:54.639910 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0920 16:44:54.639955 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.664138 16774 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0920 16:44:54.665947 16774 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0920 16:44:54.665972 16774 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0920 16:44:54.666034 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.666262 16774 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0920 16:44:54.667785 16774 out.go:177] - Using image docker.io/busybox:stable
I0920 16:44:54.669125 16774 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0920 16:44:54.669240 16774 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:44:54.669260 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0920 16:44:54.669314 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.671379 16774 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:44:54.671396 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0920 16:44:54.671447 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.672171 16774 out.go:177] - Using image docker.io/registry:2.8.3
I0920 16:44:54.673422 16774 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0920 16:44:54.674779 16774 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0920 16:44:54.674797 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0920 16:44:54.674850 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.687147 16774 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:44:54.687200 16774 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0920 16:44:54.689674 16774 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0920 16:44:54.689899 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.690098 16774 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0920 16:44:54.690113 16774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0920 16:44:54.690163 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.690323 16774 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:44:54.694932 16774 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0920 16:44:54.694931 16774 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0920 16:44:54.696248 16774 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0920 16:44:54.697074 16774 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0920 16:44:54.698339 16774 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0920 16:44:54.698353 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0920 16:44:54.698397 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.698630 16774 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0920 16:44:54.699000 16774 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0920 16:44:54.699017 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0920 16:44:54.699172 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.704934 16774 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0920 16:44:54.707745 16774 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0920 16:44:54.708023 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.711160 16774 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0920 16:44:54.712657 16774 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0920 16:44:54.714010 16774 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0920 16:44:54.714396 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.715447 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 16:44:54.715469 16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0920 16:44:54.715535 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.716387 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.725182 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.725514 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.728493 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.728917 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.734670 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.734725 16774 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0920 16:44:54.734847 16774 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0920 16:44:54.736133 16774 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0920 16:44:54.736156 16774 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0920 16:44:54.736216 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.736288 16774 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 16:44:54.736298 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0920 16:44:54.736337 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:44:54.739898 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.755575 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.762388 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:44:54.776018 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
W0920 16:44:54.778121 16774 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0920 16:44:54.778150 16774 retry.go:31] will retry after 329.660948ms: ssh: handshake failed: EOF
I0920 16:44:54.778755 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
W0920 16:44:54.845390 16774 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0920 16:44:54.845428 16774 retry.go:31] will retry after 287.554184ms: ssh: handshake failed: EOF
I0920 16:44:55.043674 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0920 16:44:55.055027 16774 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 16:44:55.055102 16774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0920 16:44:55.065870 16774 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0920 16:44:55.065898 16774 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0920 16:44:55.146472 16774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 16:44:55.146511 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0920 16:44:55.166900 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0920 16:44:55.245267 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:44:55.247824 16774 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:44:55.247901 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0920 16:44:55.251568 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0920 16:44:55.253883 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:44:55.255478 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:44:55.258841 16774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 16:44:55.258922 16774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0920 16:44:55.267822 16774 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 16:44:55.267854 16774 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0920 16:44:55.350723 16774 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0920 16:44:55.350822 16774 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0920 16:44:55.444717 16774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 16:44:55.444746 16774 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0920 16:44:55.555680 16774 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0920 16:44:55.555766 16774 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0920 16:44:55.561613 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 16:44:55.561689 16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0920 16:44:55.655606 16774 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0920 16:44:55.655661 16774 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0920 16:44:55.744116 16774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 16:44:55.744206 16774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0920 16:44:55.845794 16774 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0920 16:44:55.845819 16774 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0920 16:44:55.950542 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0920 16:44:55.960105 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:44:56.043678 16774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:44:56.043769 16774 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0920 16:44:56.146727 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 16:44:56.152381 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 16:44:56.152461 16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0920 16:44:56.162863 16774 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 16:44:56.162935 16774 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0920 16:44:56.248081 16774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 16:44:56.248165 16774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0920 16:44:56.263618 16774 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:44:56.263707 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0920 16:44:56.348527 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:44:56.668094 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 16:44:56.668119 16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0920 16:44:56.745254 16774 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 16:44:56.745339 16774 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0920 16:44:56.854039 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:44:56.964754 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 16:44:56.964785 16774 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0920 16:44:57.244950 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 16:44:57.245034 16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0920 16:44:57.362038 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.318267333s)
I0920 16:44:57.362174 16774 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.307004418s)
I0920 16:44:57.362225 16774 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0920 16:44:57.363554 16774 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.308446195s)
I0920 16:44:57.363756 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.196822695s)
I0920 16:44:57.364839 16774 node_ready.go:35] waiting up to 6m0s for node "addons-205029" to be "Ready" ...
I0920 16:44:57.449213 16774 node_ready.go:49] node "addons-205029" has status "Ready":"True"
I0920 16:44:57.449248 16774 node_ready.go:38] duration metric: took 84.345171ms for node "addons-205029" to be "Ready" ...
I0920 16:44:57.449260 16774 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
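The node_ready/pod_ready waits above poll the API server until the target object reports Ready. As a rough sketch only (plain client-go rather than minikube's own helpers; the package name, function name and timeout below are illustrative), such a pod Ready wait looks like:

package podwait // illustrative package name, not minikube's

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodReady polls a pod until its Ready condition is True, and fails fast
// if the pod completes (phase Succeeded or Failed) instead of staying up.
func WaitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // pod not created yet; keep polling
			}
			if err != nil {
				return false, err
			}
			if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
				return false, fmt.Errorf("pod %s/%s finished with phase %s", ns, name, pod.Status.Phase)
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

A caller would invoke WaitForPodReady(ctx, clientset, "kube-system", "coredns-7c65d6cfc9-hj9fq", 6*time.Minute). A pod that completes instead of staying up (as coredns-7c65d6cfc9-hj9fq does later in this log once the deployment is rescaled to one replica) can never become Ready, which is why the sketch fails fast on Succeeded/Failed phases.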
I0920 16:44:57.458128 16774 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace to be "Ready" ...
I0920 16:44:57.458187 16774 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 16:44:57.458317 16774 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0920 16:44:57.551583 16774 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:44:57.551699 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0920 16:44:57.745830 16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 16:44:57.745908 16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0920 16:44:57.866052 16774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-205029" context rescaled to 1 replicas
I0920 16:44:57.943880 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:44:57.946121 16774 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0920 16:44:57.946144 16774 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0920 16:44:58.247147 16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 16:44:58.247214 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0920 16:44:58.358845 16774 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:44:58.358878 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0920 16:44:58.747645 16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 16:44:58.747677 16774 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0920 16:44:58.950800 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:44:59.063415 16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 16:44:59.063451 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0920 16:44:59.155782 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.910427243s)
I0920 16:44:59.547844 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
I0920 16:44:59.646089 16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 16:44:59.646173 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0920 16:45:00.247595 16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:00.247876 16774 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0920 16:45:00.745083 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:01.651758 16774 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0920 16:45:01.651862 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:45:01.677078 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:45:02.048960 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:02.344103 16774 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0920 16:45:02.565787 16774 addons.go:234] Setting addon gcp-auth=true in "addons-205029"
I0920 16:45:02.565847 16774 host.go:66] Checking if "addons-205029" exists ...
I0920 16:45:02.566364 16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
I0920 16:45:02.585067 16774 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0920 16:45:02.585116 16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
I0920 16:45:02.603463 16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
I0920 16:45:04.553643 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:06.346458 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.094783278s)
I0920 16:45:06.346646 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.092731805s)
I0920 16:45:06.346717 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.091168155s)
I0920 16:45:06.346878 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.396256957s)
I0920 16:45:06.346913 16774 addons.go:475] Verifying addon ingress=true in "addons-205029"
I0920 16:45:06.347287 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.387097277s)
I0920 16:45:06.347371 16774 addons.go:475] Verifying addon registry=true in "addons-205029"
I0920 16:45:06.347399 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.200581354s)
I0920 16:45:06.347519 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.998901085s)
I0920 16:45:06.347535 16774 addons.go:475] Verifying addon metrics-server=true in "addons-205029"
I0920 16:45:06.347583 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.493511884s)
I0920 16:45:06.347766 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.403844047s)
W0920 16:45:06.348818 16774 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 16:45:06.348846 16774 retry.go:31] will retry after 357.517696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
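The retried failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the snapshot.storage.k8s.io/v1 mapping is not yet discoverable; the forced re-apply at 16:45:06.707388 completes once the CRDs are established (16:45:09.145644). A minimal sketch, assuming plain client-go/apiextensions rather than minikube's own retry logic (the kubeconfig path and CRD name below are illustrative), of waiting for a CRD to report Established=True before applying dependent custom resources:

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls until the named CRD reports Established=True,
// at which point custom resources of that kind can be applied without the
// "no matches for kind" error shown above.
func waitForCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // CRD not created yet; keep polling
			}
			if err != nil {
				return false, err
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// illustrative kubeconfig location only
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForCRDEstablished(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}

Since kubectl apply does not itself wait for a newly created CRD to become established, batching CRDs together with their custom resources generally needs either this kind of wait (kubectl wait --for=condition=established works too) or a retry, which is what minikube does here.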
I0920 16:45:06.347848 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.397016446s)
I0920 16:45:06.349320 16774 out.go:177] * Verifying ingress addon...
I0920 16:45:06.349334 16774 out.go:177] * Verifying registry addon...
I0920 16:45:06.350499 16774 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-205029 service yakd-dashboard -n yakd-dashboard
I0920 16:45:06.352647 16774 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0920 16:45:06.353760 16774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0920 16:45:06.360634 16774 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0920 16:45:06.360714 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:06.361206 16774 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0920 16:45:06.361232 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:06.707388 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:06.869584 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:06.869794 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:07.045928 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:07.357132 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:07.358678 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:07.863233 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:07.863730 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:08.054087 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.308897545s)
I0920 16:45:08.054130 16774 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-205029"
I0920 16:45:08.054161 16774 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.46906424s)
I0920 16:45:08.056202 16774 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:45:08.056216 16774 out.go:177] * Verifying csi-hostpath-driver addon...
I0920 16:45:08.058227 16774 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0920 16:45:08.059155 16774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 16:45:08.060062 16774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 16:45:08.060084 16774 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0920 16:45:08.064438 16774 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:45:08.064470 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:08.145983 16774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 16:45:08.146012 16774 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0920 16:45:08.169135 16774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:08.169159 16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0920 16:45:08.251668 16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:08.357189 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:08.357755 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:08.565248 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:08.858283 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:08.858905 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:09.064976 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:09.145644 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.438186914s)
I0920 16:45:09.357661 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:09.358063 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:09.464792 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:09.564001 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:09.677182 16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.42547412s)
I0920 16:45:09.679810 16774 addons.go:475] Verifying addon gcp-auth=true in "addons-205029"
I0920 16:45:09.681561 16774 out.go:177] * Verifying gcp-auth addon...
I0920 16:45:09.684026 16774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0920 16:45:09.744334 16774 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 16:45:09.857266 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:09.857540 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:09.964906 16774 pod_ready.go:98] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:44:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 16:44:57 +0000 UTC,FinishedAt:2024-09-20 16:45:08 +0000 UTC,ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495 Started:0xc0022abf00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020d3740} {Name:kube-api-access-jc4d2 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020d3750}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0920 16:45:09.964930 16774 pod_ready.go:82] duration metric: took 12.50670797s for pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace to be "Ready" ...
E0920 16:45:09.964941 16774 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:44:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 16:44:57 +0000 UTC,FinishedAt:2024-09-20 16:45:08 +0000 UTC,ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495 Started:0xc0022abf00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020d3740} {Name:kube-api-access-jc4d2 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020d3750}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0920 16:45:09.964951 16774 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace to be "Ready" ...
I0920 16:45:10.063304 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:10.356733 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:10.356848 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:10.563777 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:10.856662 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:10.856665 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:11.064149 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:11.357048 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:11.357123 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:11.564732 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:11.856680 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:11.856831 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:11.970531 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:12.064509 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:12.357881 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:12.358788 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:12.564051 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:12.857346 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:12.858339 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:13.063942 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:13.356739 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:13.517937 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:13.562888 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:13.856664 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:13.856711 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:14.063265 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:14.356509 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:14.357078 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:14.471213 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:14.563937 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:14.856959 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:14.857212 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:15.063518 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:15.356394 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:15.356673 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:15.562928 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:15.856714 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:15.857004 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.064244 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:16.356955 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.358161 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:16.563838 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:16.857636 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.857903 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:16.970693 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:17.063464 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:17.356583 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:17.356820 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:17.563705 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:17.856472 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:17.856758 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.062585 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:18.356806 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:18.356868 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.563163 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:18.857541 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.858596 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:19.063440 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:19.356634 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:19.356819 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:19.470369 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:19.564760 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:19.856401 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:19.856701 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.063957 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:20.357484 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.357803 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:20.563889 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:20.857385 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.857752 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:21.062940 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:21.356983 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:21.357023 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:21.471566 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:21.565784 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:21.856837 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:21.857201 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:22.064427 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:22.356428 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:22.356626 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:22.563675 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:22.857345 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:22.857957 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:23.064043 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:23.357127 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:23.357292 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:23.563581 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:23.856217 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:23.856986 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:23.974217 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:24.063121 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:24.357632 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:24.358463 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:24.564260 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:24.857612 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:24.858618 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:25.063619 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:25.356581 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:25.356726 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:25.562929 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:25.856648 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:25.857164 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:26.064029 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:26.356548 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:26.356677 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:26.470019 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:26.563369 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:26.856686 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:26.856869 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:27.063307 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:27.356446 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:27.356673 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:27.563665 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:27.856663 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:27.856885 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:28.063787 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:28.356807 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:28.357526 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:28.563791 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:28.856777 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:28.857009 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:28.970845 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:29.063611 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:29.356884 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:29.357078 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:29.563704 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:29.856412 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:29.856589 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:30.063364 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:30.356655 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:30.356798 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:30.564172 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:30.857185 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:30.857809 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:30.971069 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:31.064040 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:31.356967 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:31.357114 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:31.563136 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:31.856914 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:31.857069 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:32.063465 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:32.357188 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:32.357241 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:32.564182 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:32.856910 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:32.857092 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:33.064520 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:33.356335 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:33.356753 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:33.471255 16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:33.563215 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:33.856488 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:33.857248 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:34.064280 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:34.356859 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:34.357362 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:34.563958 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:34.856934 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:34.857222 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:35.064443 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:35.356391 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:35.356420 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:35.564173 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:35.856514 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:35.856656 16774 kapi.go:107] duration metric: took 29.502896632s to wait for kubernetes.io/minikube-addons=registry ...
I0920 16:45:35.970452 16774 pod_ready.go:93] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:35.970481 16774 pod_ready.go:82] duration metric: took 26.005522531s for pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.970496 16774 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.976916 16774 pod_ready.go:93] pod "etcd-addons-205029" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:35.976940 16774 pod_ready.go:82] duration metric: took 6.435502ms for pod "etcd-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.976953 16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.982502 16774 pod_ready.go:93] pod "kube-apiserver-addons-205029" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:35.982523 16774 pod_ready.go:82] duration metric: took 5.563544ms for pod "kube-apiserver-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.982533 16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.987118 16774 pod_ready.go:93] pod "kube-controller-manager-addons-205029" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:35.987140 16774 pod_ready.go:82] duration metric: took 4.599853ms for pod "kube-controller-manager-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.987152 16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m6rvs" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.991494 16774 pod_ready.go:93] pod "kube-proxy-m6rvs" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:35.991520 16774 pod_ready.go:82] duration metric: took 4.359262ms for pod "kube-proxy-m6rvs" in "kube-system" namespace to be "Ready" ...
I0920 16:45:35.991532 16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:36.063857 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:36.357630 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:36.368906 16774 pod_ready.go:93] pod "kube-scheduler-addons-205029" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:36.368938 16774 pod_ready.go:82] duration metric: took 377.396539ms for pod "kube-scheduler-addons-205029" in "kube-system" namespace to be "Ready" ...
I0920 16:45:36.368976 16774 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xpzd9" in "kube-system" namespace to be "Ready" ...
I0920 16:45:36.563592 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:36.768396 16774 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xpzd9" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:36.768424 16774 pod_ready.go:82] duration metric: took 399.438014ms for pod "nvidia-device-plugin-daemonset-xpzd9" in "kube-system" namespace to be "Ready" ...
I0920 16:45:36.768432 16774 pod_ready.go:39] duration metric: took 39.319159915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 16:45:36.768452 16774 api_server.go:52] waiting for apiserver process to appear ...
I0920 16:45:36.768502 16774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:36.784595 16774 api_server.go:72] duration metric: took 42.230762976s to wait for apiserver process to appear ...
I0920 16:45:36.784617 16774 api_server.go:88] waiting for apiserver healthz status ...
I0920 16:45:36.784638 16774 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0920 16:45:36.789300 16774 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0920 16:45:36.790264 16774 api_server.go:141] control plane version: v1.31.1
I0920 16:45:36.790288 16774 api_server.go:131] duration metric: took 5.665428ms to wait for apiserver health ...
I0920 16:45:36.790297 16774 system_pods.go:43] waiting for kube-system pods to appear ...
I0920 16:45:36.857027 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:36.975185 16774 system_pods.go:59] 17 kube-system pods found
I0920 16:45:36.975218 16774 system_pods.go:61] "coredns-7c65d6cfc9-zsdfb" [726c17a6-7f53-49e4-ac8a-783182889340] Running
I0920 16:45:36.975229 16774 system_pods.go:61] "csi-hostpath-attacher-0" [3c7a327a-8620-48ac-ab71-1dce4985efc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 16:45:36.975240 16774 system_pods.go:61] "csi-hostpath-resizer-0" [8ad61db7-1573-41a0-bdbf-4409341769e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 16:45:36.975250 16774 system_pods.go:61] "csi-hostpathplugin-f5rlb" [433d3846-18be-4200-81ee-9c1b69c03797] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 16:45:36.975257 16774 system_pods.go:61] "etcd-addons-205029" [da5fb10c-8086-498a-bda4-2f1cac80070e] Running
I0920 16:45:36.975264 16774 system_pods.go:61] "kube-apiserver-addons-205029" [33309b9f-3d85-48c0-b656-51de82848533] Running
I0920 16:45:36.975273 16774 system_pods.go:61] "kube-controller-manager-addons-205029" [1a0232fb-ffab-4e7a-88cf-c26f2c65aa24] Running
I0920 16:45:36.975281 16774 system_pods.go:61] "kube-ingress-dns-minikube" [cf2b54b5-fa63-42a9-a833-af0242b4cb46] Running
I0920 16:45:36.975290 16774 system_pods.go:61] "kube-proxy-m6rvs" [235e9e6f-4299-43f7-8b9e-8887ecb70cd5] Running
I0920 16:45:36.975295 16774 system_pods.go:61] "kube-scheduler-addons-205029" [7dd9cb76-e503-4e1e-a4c6-bcf31e76e886] Running
I0920 16:45:36.975307 16774 system_pods.go:61] "metrics-server-84c5f94fbc-44j97" [d4cc25a2-9517-4e7b-9fa5-57b6a061d910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 16:45:36.975314 16774 system_pods.go:61] "nvidia-device-plugin-daemonset-xpzd9" [caf9d40a-dff4-4e28-b6c7-d185e6e30b5a] Running
I0920 16:45:36.975323 16774 system_pods.go:61] "registry-66c9cd494c-2sstq" [67cce838-d446-44f8-90cb-4b7c286fcfcb] Running
I0920 16:45:36.975328 16774 system_pods.go:61] "registry-proxy-r58ln" [243fbbcd-f60b-492a-ab03-a7425f4bce3b] Running
I0920 16:45:36.975341 16774 system_pods.go:61] "snapshot-controller-56fcc65765-l8spt" [2770a7ca-b59f-4393-a8b8-a0380a26fc3c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:36.975361 16774 system_pods.go:61] "snapshot-controller-56fcc65765-lzk5g" [2fa9904e-14d6-4369-8c43-740334b4055f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:36.975369 16774 system_pods.go:61] "storage-provisioner" [a34eadc8-0330-4959-afc1-2093e6fc6774] Running
I0920 16:45:36.975378 16774 system_pods.go:74] duration metric: took 185.075475ms to wait for pod list to return data ...
I0920 16:45:36.975389 16774 default_sa.go:34] waiting for default service account to be created ...
I0920 16:45:37.064100 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:37.168740 16774 default_sa.go:45] found service account: "default"
I0920 16:45:37.168768 16774 default_sa.go:55] duration metric: took 193.368649ms for default service account to be created ...
I0920 16:45:37.168779 16774 system_pods.go:116] waiting for k8s-apps to be running ...
I0920 16:45:37.357286 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:37.375225 16774 system_pods.go:86] 17 kube-system pods found
I0920 16:45:37.375254 16774 system_pods.go:89] "coredns-7c65d6cfc9-zsdfb" [726c17a6-7f53-49e4-ac8a-783182889340] Running
I0920 16:45:37.375265 16774 system_pods.go:89] "csi-hostpath-attacher-0" [3c7a327a-8620-48ac-ab71-1dce4985efc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 16:45:37.375273 16774 system_pods.go:89] "csi-hostpath-resizer-0" [8ad61db7-1573-41a0-bdbf-4409341769e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 16:45:37.375284 16774 system_pods.go:89] "csi-hostpathplugin-f5rlb" [433d3846-18be-4200-81ee-9c1b69c03797] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 16:45:37.375290 16774 system_pods.go:89] "etcd-addons-205029" [da5fb10c-8086-498a-bda4-2f1cac80070e] Running
I0920 16:45:37.375297 16774 system_pods.go:89] "kube-apiserver-addons-205029" [33309b9f-3d85-48c0-b656-51de82848533] Running
I0920 16:45:37.375303 16774 system_pods.go:89] "kube-controller-manager-addons-205029" [1a0232fb-ffab-4e7a-88cf-c26f2c65aa24] Running
I0920 16:45:37.375316 16774 system_pods.go:89] "kube-ingress-dns-minikube" [cf2b54b5-fa63-42a9-a833-af0242b4cb46] Running
I0920 16:45:37.375322 16774 system_pods.go:89] "kube-proxy-m6rvs" [235e9e6f-4299-43f7-8b9e-8887ecb70cd5] Running
I0920 16:45:37.375327 16774 system_pods.go:89] "kube-scheduler-addons-205029" [7dd9cb76-e503-4e1e-a4c6-bcf31e76e886] Running
I0920 16:45:37.375334 16774 system_pods.go:89] "metrics-server-84c5f94fbc-44j97" [d4cc25a2-9517-4e7b-9fa5-57b6a061d910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 16:45:37.375347 16774 system_pods.go:89] "nvidia-device-plugin-daemonset-xpzd9" [caf9d40a-dff4-4e28-b6c7-d185e6e30b5a] Running
I0920 16:45:37.375351 16774 system_pods.go:89] "registry-66c9cd494c-2sstq" [67cce838-d446-44f8-90cb-4b7c286fcfcb] Running
I0920 16:45:37.375354 16774 system_pods.go:89] "registry-proxy-r58ln" [243fbbcd-f60b-492a-ab03-a7425f4bce3b] Running
I0920 16:45:37.375360 16774 system_pods.go:89] "snapshot-controller-56fcc65765-l8spt" [2770a7ca-b59f-4393-a8b8-a0380a26fc3c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:37.375368 16774 system_pods.go:89] "snapshot-controller-56fcc65765-lzk5g" [2fa9904e-14d6-4369-8c43-740334b4055f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:37.375372 16774 system_pods.go:89] "storage-provisioner" [a34eadc8-0330-4959-afc1-2093e6fc6774] Running
I0920 16:45:37.375379 16774 system_pods.go:126] duration metric: took 206.594432ms to wait for k8s-apps to be running ...
I0920 16:45:37.375389 16774 system_svc.go:44] waiting for kubelet service to be running ....
I0920 16:45:37.375441 16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0920 16:45:37.389864 16774 system_svc.go:56] duration metric: took 14.467151ms WaitForService to wait for kubelet
I0920 16:45:37.389896 16774 kubeadm.go:582] duration metric: took 42.836065711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 16:45:37.389918 16774 node_conditions.go:102] verifying NodePressure condition ...
I0920 16:45:37.564272 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:37.569494 16774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0920 16:45:37.569523 16774 node_conditions.go:123] node cpu capacity is 8
I0920 16:45:37.569540 16774 node_conditions.go:105] duration metric: took 179.615518ms to run NodePressure ...
I0920 16:45:37.569554 16774 start.go:241] waiting for startup goroutines ...
I0920 16:45:37.856861 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:38.064450 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:38.356139 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:38.563871 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:38.857137 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:39.064868 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:39.356944 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:39.564081 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:39.856834 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:40.063985 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:40.360725 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:40.563516 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:40.857175 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:41.064245 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:41.355868 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:41.563484 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:41.857264 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:42.063933 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:42.357349 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:42.563534 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:42.857646 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:43.064229 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:43.356856 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:43.563921 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:43.857009 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:44.063159 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:44.356843 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:44.563236 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:44.857659 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:45.064360 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:45.357196 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:45.563704 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:45.857110 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:46.063314 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:46.356126 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:46.562909 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:46.857170 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:47.064229 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:47.357575 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:47.563900 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:47.856747 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:48.064600 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:48.357248 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:48.565106 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:48.857239 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:49.063572 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:49.357361 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:49.563239 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:49.857557 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:50.064189 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:50.357046 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:50.563470 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:50.856233 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:51.064325 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:51.356631 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:51.563585 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:51.857153 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:52.064489 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:52.357283 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:52.602223 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:52.856882 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:53.063295 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:53.356609 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:53.563057 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:53.856854 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:54.063933 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:54.356866 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:54.563271 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:54.856922 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:55.063761 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:55.432327 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:55.564284 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:55.856546 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:56.064114 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:56.356883 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:56.563642 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:56.857469 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:57.064693 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:57.357560 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:57.564010 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:57.856660 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:58.064180 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:58.356652 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:58.564409 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:58.856794 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:59.064278 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:59.356861 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:59.564396 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:59.857220 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:00.063551 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:00.355824 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:00.564168 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:00.856946 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:01.064190 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:01.357506 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:01.563279 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:01.857086 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:02.063556 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:02.357325 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:02.564442 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:02.856912 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:03.064071 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:03.356081 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:03.563801 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:03.857188 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:04.063652 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:04.357421 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:04.563533 16774 kapi.go:107] duration metric: took 56.504381183s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0920 16:46:04.856764 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:05.356543 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:05.856913 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:06.357045 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:06.858653 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:07.356456 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:07.856432 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:08.356846 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:08.856690 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:09.357334 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:09.857425 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:10.357427 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:10.857270 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:11.356682 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:11.857025 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:12.357053 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:12.857543 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:13.427837 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:13.857348 16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:14.356559 16774 kapi.go:107] duration metric: took 1m8.003907962s to wait for app.kubernetes.io/name=ingress-nginx ...
I0920 16:46:33.188098 16774 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 16:46:33.188121 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:33.687533 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:34.187496 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:34.687770 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:35.187960 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:35.686566 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:36.187700 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:36.687760 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:37.187938 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:37.686881 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:38.187951 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:38.687806 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:39.187937 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:39.686583 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:40.187144 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:40.687153 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:41.187121 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:41.686752 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:42.188749 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:42.686493 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:43.187703 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:43.687582 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:44.187423 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:44.687257 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:45.188066 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:45.687827 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:46.187552 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:46.687074 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:47.187186 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:47.686780 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:48.187783 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:48.686597 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:49.188086 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:49.687089 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:50.186922 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:50.688773 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:51.187532 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:51.687727 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:52.188074 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:52.687066 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:53.187303 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:53.687233 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:54.187109 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:54.687205 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:55.187211 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:55.687160 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:56.186859 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:56.687710 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:57.188023 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:57.686931 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:58.186770 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:58.687853 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:59.187816 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:59.687299 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:00.187825 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:00.695714 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:01.186802 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:01.687963 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:02.188150 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:02.687378 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:03.187394 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:03.689232 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:04.187384 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:04.687313 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:05.187578 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:05.687359 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:06.187163 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:06.687111 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:07.188080 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:07.687987 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:08.186887 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:08.687025 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:09.186720 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:09.687638 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:10.188103 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:10.688066 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:11.186930 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:11.687577 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:12.188172 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:12.687570 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:13.187259 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:13.687266 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:14.187289 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:14.687099 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:15.187032 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:15.687033 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:16.186824 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:16.687722 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:17.188038 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:17.687803 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:18.187752 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:18.686889 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:19.187660 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:19.687175 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:20.186890 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:20.687981 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:21.187702 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:21.687810 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:22.187884 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:22.688201 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:23.187041 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:23.687248 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:24.186939 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:24.686956 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:25.187944 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:25.687856 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:26.187846 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:26.687488 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:27.187845 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:27.687538 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:28.187500 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:28.687612 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:29.187430 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:29.686537 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:30.187861 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:30.687371 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:31.187045 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:31.686941 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:32.187333 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:32.686783 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:33.187789 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:33.686770 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:34.187699 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:34.687643 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:35.187599 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:35.687704 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:36.187648 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:36.687460 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:37.187563 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:37.687288 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:38.187131 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:38.687439 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:39.187058 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:39.687594 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:40.187849 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:40.686756 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:41.187849 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:41.687848 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:42.188377 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:42.687446 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:43.188308 16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:43.687617 16774 kapi.go:107] duration metric: took 2m34.003590742s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0920 16:47:43.689263 16774 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-205029 cluster.
I0920 16:47:43.691115 16774 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0920 16:47:43.692517 16774 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0920 16:47:43.694024 16774 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner-rancher, volcano, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0920 16:47:43.695362 16774 addons.go:510] duration metric: took 2m49.141592586s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner-rancher volcano storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0920 16:47:43.695416 16774 start.go:246] waiting for cluster config update ...
I0920 16:47:43.695443 16774 start.go:255] writing updated cluster config ...
I0920 16:47:43.695724 16774 ssh_runner.go:195] Run: rm -f paused
I0920 16:47:43.744662 16774 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0920 16:47:43.746645 16774 out.go:177] * Done! kubectl is now configured to use "addons-205029" cluster and "default" namespace by default
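For reference, the gcp-auth hint printed at 16:47:43 above (add a label with the `gcp-auth-skip-secret` key to opt a pod out of credential mounting) can be exercised with a plain kubectl command. This is only an illustrative sketch, not part of the test run: the pod name, image, and sleep duration are placeholders, and it assumes the addons-205029 context from this log.

  kubectl --context addons-205029 run skip-gcp-auth-demo --image=busybox --restart=Never \
    --labels=gcp-auth-skip-secret=true -- sleep 3600

Per the message above, a pod carrying that label key should be created without the GCP credentials being mounted into it.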
==> Docker <==
Sep 20 16:57:15 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:15Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.200217406Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=8b41ac1c172cbd6d1e887c082d7b84137325526cb5404a5ff4e2c9b46ec92693 spanID=5331617768338407 traceID=12f7e2a72b18a0df9f11c1c0b50664aa
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.221749663Z" level=info msg="ignoring event" container=8b41ac1c172cbd6d1e887c082d7b84137325526cb5404a5ff4e2c9b46ec92693 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.341749999Z" level=info msg="ignoring event" container=fcf8b6c5426f938727b077259b0b93dafcaefc6f0ff3c3dc7f40c95b197a696b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.756357771Z" level=info msg="ignoring event" container=9fce6a32f2f45be74aafce36b1ed3ee8caa42aa0595637a0f016f44bd54ef68a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.757278420Z" level=info msg="ignoring event" container=ed3f6bf61f9d7d7c536be1e57d9af03b1d7d5b6560f1a245ab2fc9ae52ff778f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.899339249Z" level=info msg="ignoring event" container=792ba9be9bd8fd820a90f1e42908a37e550fbde59dc1aad493e1393f62dc08d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.941417965Z" level=info msg="ignoring event" container=956fbc6feda09d3046bc0a2d5ab69bc273d5ed15ecf4a9e887ff9a57ef020d28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:20 addons-205029 dockerd[1338]: time="2024-09-20T16:57:20.280887678Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=02bccbf18e05720d traceID=796d04e38cd35e776d24af1aee8a7830
Sep 20 16:57:20 addons-205029 dockerd[1338]: time="2024-09-20T16:57:20.283232320Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=02bccbf18e05720d traceID=796d04e38cd35e776d24af1aee8a7830
Sep 20 16:57:23 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/988c0c3c01b8dec5c02d6845445cc43712d5e7a30eb38fc37b7a1c4f228d320c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 20 16:57:24 addons-205029 dockerd[1338]: time="2024-09-20T16:57:24.204344276Z" level=info msg="ignoring event" container=44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:24 addons-205029 dockerd[1338]: time="2024-09-20T16:57:24.250644894Z" level=info msg="ignoring event" container=814032f2e100c2accc20910994da34b116ee63d12f61d0e1c1cd5333a1148fe4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:25 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:25Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
Sep 20 16:57:28 addons-205029 dockerd[1338]: time="2024-09-20T16:57:28.170705177Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3 spanID=88fd4a1ca725db4a traceID=94ce11c16daa9b2673671b0abb87f9b1
Sep 20 16:57:28 addons-205029 dockerd[1338]: time="2024-09-20T16:57:28.224616834Z" level=info msg="ignoring event" container=b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:28 addons-205029 dockerd[1338]: time="2024-09-20T16:57:28.362563363Z" level=info msg="ignoring event" container=2057642ad0f9c93fbbbb3e9da32f2d92a0c23179aaa864519f6d63e2ead0faa5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:29 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:29Z" level=error msg="error getting RW layer size for container ID '44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599': Error response from daemon: No such container: 44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599"
Sep 20 16:57:29 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599'"
Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.147088821Z" level=info msg="ignoring event" container=9d2da6dde0ef36d874416b5e56c01a77648d285675a9f153cef856d00064e58f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.656191639Z" level=info msg="ignoring event" container=a0f94f0a24718148dc0489393e7aea5377a510b08ca21b5fa848daf98bede421 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.727755489Z" level=info msg="ignoring event" container=c20060aa3ed13af6cf27794ae93751298dedebb43f4e90faca7daea0cd145e79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.807510088Z" level=info msg="ignoring event" container=e6a7d18e663a25a730d4f6a1fd3b40253be8000145625dff5a67c31b3ff8508c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:57:37 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-r58ln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.897505870Z" level=info msg="ignoring event" container=c04164513365621b2371cadffa8cc82b903bfb8592fa006de4896f508ce02c08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
5081cf10fe14c kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 13 seconds ago Running hello-world-app 0 988c0c3c01b8d hello-world-app-55bf9c44b4-tmpmp
1b7718ec98f92 nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf 23 seconds ago Running nginx 0 37186b004ad20 nginx
897b1b4e3fa07 a416a98b71e22 49 seconds ago Exited helper-pod 0 20b37dd65d3f4 helper-pod-delete-pvc-d6bd4afe-8bba-4f86-86d7-a230517a8194
bc4d995d11bcc gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 9a3c16f48fbd0 gcp-auth-89d5ffd79-p7btr
5799d09540395 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 9b42c9a68f759 ingress-nginx-admission-patch-rpgr8
1cc91738417ed registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 20d86356b5231 ingress-nginx-admission-create-fht9m
c20060aa3ed13 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 12 minutes ago Exited registry-proxy 0 c041645133656 registry-proxy-r58ln
a0f94f0a24718 registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 12 minutes ago Exited registry 0 e6a7d18e663a2 registry-66c9cd494c-2sstq
d38f69a74eb18 6e38f40d628db 12 minutes ago Running storage-provisioner 0 8cb88a4f31c25 storage-provisioner
e22b2be76b742 c69fa2e9cbf5f 12 minutes ago Running coredns 0 0c3648605f747 coredns-7c65d6cfc9-zsdfb
82e7a7b780258 60c005f310ff3 12 minutes ago Running kube-proxy 0 09cc583370936 kube-proxy-m6rvs
3556b0f5ce7c0 2e96e5913fc06 12 minutes ago Running etcd 0 7dbf2978ee818 etcd-addons-205029
613a1b8e140bb 9aa1fad941575 12 minutes ago Running kube-scheduler 0 605264de80a0e kube-scheduler-addons-205029
f35872d5577c2 6bab7719df100 12 minutes ago Running kube-apiserver 0 25fcc64785e28 kube-apiserver-addons-205029
7215c72a915c2 175ffd71cce3d 12 minutes ago Running kube-controller-manager 0 52f087565180b kube-controller-manager-addons-205029
==> coredns [e22b2be76b74] <==
[INFO] 10.244.0.8:40573 - 6500 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008079s
[INFO] 10.244.0.8:38463 - 40354 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072848s
[INFO] 10.244.0.8:38463 - 65440 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108458s
[INFO] 10.244.0.8:48618 - 36252 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004903327s
[INFO] 10.244.0.8:48618 - 59539 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00487924s
[INFO] 10.244.0.8:49813 - 17578 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004355539s
[INFO] 10.244.0.8:49813 - 11438 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005364924s
[INFO] 10.244.0.8:40461 - 26041 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004535564s
[INFO] 10.244.0.8:40461 - 42677 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00616796s
[INFO] 10.244.0.8:60468 - 35914 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083808s
[INFO] 10.244.0.8:60468 - 26950 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129051s
[INFO] 10.244.0.25:40388 - 37440 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027489s
[INFO] 10.244.0.25:46048 - 5263 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016114s
[INFO] 10.244.0.25:52248 - 1253 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014577s
[INFO] 10.244.0.25:47006 - 62795 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00020073s
[INFO] 10.244.0.25:57177 - 26279 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129635s
[INFO] 10.244.0.25:34269 - 32671 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00017305s
[INFO] 10.244.0.25:34520 - 3129 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008757732s
[INFO] 10.244.0.25:43942 - 25288 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008829255s
[INFO] 10.244.0.25:35893 - 48985 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007424114s
[INFO] 10.244.0.25:33589 - 27688 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007640202s
[INFO] 10.244.0.25:52108 - 25128 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006747925s
[INFO] 10.244.0.25:32878 - 20082 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007507541s
[INFO] 10.244.0.25:36973 - 13363 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.003010406s
[INFO] 10.244.0.25:37503 - 32130 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.004152006s
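The NXDOMAIN chain in the coredns log above is the resolver's search-path expansion: with ndots:5 (see the cri-dockerd resolv.conf rewrite in the Docker section), a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so each search suffix is tried before the name is queried as-is. A rough sketch of that expansion, assuming it is run from a pod inside this cluster and using 10.96.0.10 (the cluster DNS address from the resolv.conf line above); the suffix list simply mirrors the queries visible in the log:

  for d in svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal; do
    nslookup registry.kube-system.svc.cluster.local.$d 10.96.0.10
  done
  nslookup registry.kube-system.svc.cluster.local 10.96.0.10

Only the final, unsuffixed query returns NOERROR, which matches the last two coredns entries for 10.244.0.8 above.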
==> describe nodes <==
Name: addons-205029
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-205029
kubernetes.io/os=linux
minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
minikube.k8s.io/name=addons-205029
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_20T16_44_49_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-205029
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2024 16:44:46 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-205029
AcquireTime: <unset>
RenewTime: Fri, 20 Sep 2024 16:57:34 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 20 Sep 2024 16:57:25 +0000 Fri, 20 Sep 2024 16:44:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Sep 2024 16:57:25 +0000 Fri, 20 Sep 2024 16:44:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 20 Sep 2024 16:57:25 +0000 Fri, 20 Sep 2024 16:44:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 20 Sep 2024 16:57:25 +0000 Fri, 20 Sep 2024 16:44:47 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-205029
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
System Info:
Machine ID: 7636be5e00d74b5e91ccb5e8ab2cd570
System UUID: f5c8962a-51ca-4e02-8bba-f9cc61977477
Boot ID: 1090cbe7-7e52-40cc-b00d-227cb699fd1e
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.0
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m14s
default hello-world-app-55bf9c44b4-tmpmp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27s
gcp-auth gcp-auth-89d5ffd79-p7btr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system coredns-7c65d6cfc9-zsdfb 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 12m
kube-system etcd-addons-205029 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 12m
kube-system kube-apiserver-addons-205029 250m (3%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-addons-205029 200m (2%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-m6rvs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-addons-205029 100m (1%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node addons-205029 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node addons-205029 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node addons-205029 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node addons-205029 event: Registered Node addons-205029 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d9 28 2e 82 1c 08 06
[ +2.232064] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 31 f4 87 1d 47 08 06
[ +2.880027] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 6f 7b d0 48 22 08 06
[Sep20 16:46] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 60 44 3e a5 82 08 06
[ +0.065178] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 45 c1 15 3a ff 08 06
[ +0.014735] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 44 75 61 3e 61 08 06
[ +7.830531] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 c5 96 fc 06 0d 08 06
[ +3.891799] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 fd a0 0f 0c 90 08 06
[Sep20 16:47] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000002] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 67 c5 70 47 a4 08 06
[ +0.000000] ll header: 00000000: ff ff ff ff ff ff 8e 07 02 01 1c ad 08 06
[ +28.840848] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 9c 63 ec fd b8 08 06
[ +0.000460] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 6a b0 dc e0 b7 f6 08 06
[Sep20 16:57] IPv4: martian source 10.244.0.35 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 fd a0 0f 0c 90 08 06
==> etcd [3556b0f5ce7c] <==
{"level":"info","ts":"2024-09-20T16:44:45.270865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-20T16:44:45.270901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-20T16:44:45.270913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-20T16:44:45.270927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-20T16:44:45.272001Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-205029 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-20T16:44:45.272071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-20T16:44:45.272178Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:45.272256Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-20T16:44:45.272297Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-20T16:44:45.272320Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-20T16:44:45.273048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:45.273118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:45.273139Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:45.273385Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-20T16:44:45.273500Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-20T16:44:45.274299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-20T16:44:45.274516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-20T16:45:13.515320Z","caller":"traceutil/trace.go:171","msg":"trace[23916195] linearizableReadLoop","detail":"{readStateIndex:961; appliedIndex:960; }","duration":"160.089812ms","start":"2024-09-20T16:45:13.355210Z","end":"2024-09-20T16:45:13.515300Z","steps":["trace[23916195] 'read index received' (duration: 96.016148ms)","trace[23916195] 'applied index is now lower than readState.Index' (duration: 64.072732ms)"],"step_count":2}
{"level":"warn","ts":"2024-09-20T16:45:13.515459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.312532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:45:13.515529Z","caller":"traceutil/trace.go:171","msg":"trace[746983478] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:940; }","duration":"160.3997ms","start":"2024-09-20T16:45:13.355118Z","end":"2024-09-20T16:45:13.515518Z","steps":["trace[746983478] 'agreement among raft nodes before linearized reading' (duration: 160.289271ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:45:13.515539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.107782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f701923977c502\" ","response":"range_response_count:1 size:927"}
{"level":"info","ts":"2024-09-20T16:45:13.515567Z","caller":"traceutil/trace.go:171","msg":"trace[1285220694] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f701923977c502; range_end:; response_count:1; response_revision:940; }","duration":"112.137698ms","start":"2024-09-20T16:45:13.403419Z","end":"2024-09-20T16:45:13.515557Z","steps":["trace[1285220694] 'agreement among raft nodes before linearized reading' (duration: 112.028858ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:54:45.382856Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1847}
{"level":"info","ts":"2024-09-20T16:54:45.405678Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1847,"took":"22.278866ms","hash":435374599,"current-db-size-bytes":8638464,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4804608,"current-db-size-in-use":"4.8 MB"}
{"level":"info","ts":"2024-09-20T16:54:45.405721Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435374599,"revision":1847,"compact-revision":-1}
==> gcp-auth [bc4d995d11bc] <==
2024/09/20 16:48:24 Ready to write response ...
2024/09/20 16:48:24 Ready to marshal response ...
2024/09/20 16:48:24 Ready to write response ...
2024/09/20 16:56:34 Ready to marshal response ...
2024/09/20 16:56:34 Ready to write response ...
2024/09/20 16:56:37 Ready to marshal response ...
2024/09/20 16:56:37 Ready to write response ...
2024/09/20 16:56:37 Ready to marshal response ...
2024/09/20 16:56:37 Ready to write response ...
2024/09/20 16:56:37 Ready to marshal response ...
2024/09/20 16:56:37 Ready to write response ...
2024/09/20 16:56:44 Ready to marshal response ...
2024/09/20 16:56:44 Ready to write response ...
2024/09/20 16:56:44 Ready to marshal response ...
2024/09/20 16:56:44 Ready to write response ...
2024/09/20 16:56:44 Ready to marshal response ...
2024/09/20 16:56:44 Ready to write response ...
2024/09/20 16:56:48 Ready to marshal response ...
2024/09/20 16:56:48 Ready to write response ...
2024/09/20 16:57:03 Ready to marshal response ...
2024/09/20 16:57:03 Ready to write response ...
2024/09/20 16:57:11 Ready to marshal response ...
2024/09/20 16:57:11 Ready to write response ...
2024/09/20 16:57:23 Ready to marshal response ...
2024/09/20 16:57:23 Ready to write response ...
==> kernel <==
16:57:38 up 40 min, 0 users, load average: 0.34, 0.44, 0.49
Linux addons-205029 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [f35872d5577c] <==
W0920 16:48:16.444571 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0920 16:48:16.467947 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0920 16:48:16.552976 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0920 16:48:16.848687 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0920 16:48:17.207394 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0920 16:56:42.788647 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0920 16:56:43.277083 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0920 16:56:44.120076 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.91.39"}
E0920 16:57:04.395270 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I0920 16:57:06.374840 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0920 16:57:07.389598 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0920 16:57:11.838527 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0920 16:57:12.053349 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.158.169"}
I0920 16:57:19.358106 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0920 16:57:19.358163 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0920 16:57:19.380116 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0920 16:57:19.380170 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0920 16:57:19.451567 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0920 16:57:19.451620 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0920 16:57:19.553167 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0920 16:57:19.553213 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0920 16:57:20.380456 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0920 16:57:20.554037 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0920 16:57:20.562322 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0920 16:57:23.561300 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.153.45"}
==> kube-controller-manager [7215c72a915c] <==
I0920 16:57:25.151411 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
I0920 16:57:25.188001 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-205029"
W0920 16:57:26.391160 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:26.391196 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:57:26.699728 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.21819ms"
I0920 16:57:26.700134 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.35µs"
W0920 16:57:27.452057 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:27.452096 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:57:27.772024 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:27.772061 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:57:28.300181 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:28.300218 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:57:30.514674 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:30.514720 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:57:32.906554 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:32.906596 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:57:34.633226 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:34.633264 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:57:35.215649 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
I0920 16:57:36.841342 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
I0920 16:57:37.591817 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="19.096µs"
W0920 16:57:37.644133 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:37.644183 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:57:37.834094 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:57:37.834140 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [82e7a7b78025] <==
I0920 16:44:57.451996 1 server_linux.go:66] "Using iptables proxy"
I0920 16:44:58.053162 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0920 16:44:58.053279 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0920 16:44:58.260214 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0920 16:44:58.260274 1 server_linux.go:169] "Using iptables Proxier"
I0920 16:44:58.360455 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0920 16:44:58.360968 1 server.go:483] "Version info" version="v1.31.1"
I0920 16:44:58.360994 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0920 16:44:58.363088 1 config.go:199] "Starting service config controller"
I0920 16:44:58.363121 1 shared_informer.go:313] Waiting for caches to sync for service config
I0920 16:44:58.363158 1 config.go:105] "Starting endpoint slice config controller"
I0920 16:44:58.363165 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0920 16:44:58.363745 1 config.go:328] "Starting node config controller"
I0920 16:44:58.363754 1 shared_informer.go:313] Waiting for caches to sync for node config
I0920 16:44:58.463732 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0920 16:44:58.463787 1 shared_informer.go:320] Caches are synced for node config
I0920 16:44:58.464383 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [613a1b8e140b] <==
E0920 16:44:46.864114 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0920 16:44:46.864654 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:46.864693 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0920 16:44:46.864711 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:46.865071 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0920 16:44:46.865099 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.676557 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0920 16:44:47.676596 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.700046 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0920 16:44:47.700082 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.779872 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0920 16:44:47.779918 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.786207 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0920 16:44:47.786261 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.859057 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0920 16:44:47.859105 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0920 16:44:47.875247 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:47.875287 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.883636 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0920 16:44:47.883675 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:47.884304 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0920 16:44:47.884342 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:48.036828 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 16:44:48.036864 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
I0920 16:44:51.060280 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.596820 2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x67ds\" (UniqueName: \"kubernetes.io/projected/27679691-b05b-4349-adcc-503ae9858cbb-kube-api-access-x67ds\") pod \"27679691-b05b-4349-adcc-503ae9858cbb\" (UID: \"27679691-b05b-4349-adcc-503ae9858cbb\") "
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.596890 2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27679691-b05b-4349-adcc-503ae9858cbb-webhook-cert\") pod \"27679691-b05b-4349-adcc-503ae9858cbb\" (UID: \"27679691-b05b-4349-adcc-503ae9858cbb\") "
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.598817 2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27679691-b05b-4349-adcc-503ae9858cbb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "27679691-b05b-4349-adcc-503ae9858cbb" (UID: "27679691-b05b-4349-adcc-503ae9858cbb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.598991 2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27679691-b05b-4349-adcc-503ae9858cbb-kube-api-access-x67ds" (OuterVolumeSpecName: "kube-api-access-x67ds") pod "27679691-b05b-4349-adcc-503ae9858cbb" (UID: "27679691-b05b-4349-adcc-503ae9858cbb"). InnerVolumeSpecName "kube-api-access-x67ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.698059 2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x67ds\" (UniqueName: \"kubernetes.io/projected/27679691-b05b-4349-adcc-503ae9858cbb-kube-api-access-x67ds\") on node \"addons-205029\" DevicePath \"\""
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.698098 2429 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27679691-b05b-4349-adcc-503ae9858cbb-webhook-cert\") on node \"addons-205029\" DevicePath \"\""
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.711772 2429 scope.go:117] "RemoveContainer" containerID="b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.726078 2429 scope.go:117] "RemoveContainer" containerID="b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
Sep 20 16:57:28 addons-205029 kubelet[2429]: E0920 16:57:28.726802 2429 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3" containerID="b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.726841 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"} err="failed to get container status \"b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3\": rpc error: code = Unknown desc = Error response from daemon: No such container: b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
Sep 20 16:57:29 addons-205029 kubelet[2429]: I0920 16:57:29.170060 2429 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27679691-b05b-4349-adcc-503ae9858cbb" path="/var/lib/kubelet/pods/27679691-b05b-4349-adcc-503ae9858cbb/volumes"
Sep 20 16:57:31 addons-205029 kubelet[2429]: E0920 16:57:31.168168 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="28df48e5-8914-4ad7-9aa6-f963fe3d9246"
Sep 20 16:57:33 addons-205029 kubelet[2429]: E0920 16:57:33.165014 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8c28c68-13a2-465a-8862-d35552e16a2d"
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.350307 2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb6pp\" (UniqueName: \"kubernetes.io/projected/28df48e5-8914-4ad7-9aa6-f963fe3d9246-kube-api-access-nb6pp\") pod \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\" (UID: \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\") "
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.350375 2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28df48e5-8914-4ad7-9aa6-f963fe3d9246-gcp-creds\") pod \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\" (UID: \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\") "
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.351075 2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28df48e5-8914-4ad7-9aa6-f963fe3d9246-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "28df48e5-8914-4ad7-9aa6-f963fe3d9246" (UID: "28df48e5-8914-4ad7-9aa6-f963fe3d9246"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.352964 2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28df48e5-8914-4ad7-9aa6-f963fe3d9246-kube-api-access-nb6pp" (OuterVolumeSpecName: "kube-api-access-nb6pp") pod "28df48e5-8914-4ad7-9aa6-f963fe3d9246" (UID: "28df48e5-8914-4ad7-9aa6-f963fe3d9246"). InnerVolumeSpecName "kube-api-access-nb6pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.451162 2429 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28df48e5-8914-4ad7-9aa6-f963fe3d9246-gcp-creds\") on node \"addons-205029\" DevicePath \"\""
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.451193 2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nb6pp\" (UniqueName: \"kubernetes.io/projected/28df48e5-8914-4ad7-9aa6-f963fe3d9246-kube-api-access-nb6pp\") on node \"addons-205029\" DevicePath \"\""
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.954476 2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5w5t\" (UniqueName: \"kubernetes.io/projected/67cce838-d446-44f8-90cb-4b7c286fcfcb-kube-api-access-r5w5t\") pod \"67cce838-d446-44f8-90cb-4b7c286fcfcb\" (UID: \"67cce838-d446-44f8-90cb-4b7c286fcfcb\") "
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.954535 2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khs5m\" (UniqueName: \"kubernetes.io/projected/243fbbcd-f60b-492a-ab03-a7425f4bce3b-kube-api-access-khs5m\") pod \"243fbbcd-f60b-492a-ab03-a7425f4bce3b\" (UID: \"243fbbcd-f60b-492a-ab03-a7425f4bce3b\") "
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.957569 2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cce838-d446-44f8-90cb-4b7c286fcfcb-kube-api-access-r5w5t" (OuterVolumeSpecName: "kube-api-access-r5w5t") pod "67cce838-d446-44f8-90cb-4b7c286fcfcb" (UID: "67cce838-d446-44f8-90cb-4b7c286fcfcb"). InnerVolumeSpecName "kube-api-access-r5w5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.957737 2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243fbbcd-f60b-492a-ab03-a7425f4bce3b-kube-api-access-khs5m" (OuterVolumeSpecName: "kube-api-access-khs5m") pod "243fbbcd-f60b-492a-ab03-a7425f4bce3b" (UID: "243fbbcd-f60b-492a-ab03-a7425f4bce3b"). InnerVolumeSpecName "kube-api-access-khs5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:57:38 addons-205029 kubelet[2429]: I0920 16:57:38.055062 2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r5w5t\" (UniqueName: \"kubernetes.io/projected/67cce838-d446-44f8-90cb-4b7c286fcfcb-kube-api-access-r5w5t\") on node \"addons-205029\" DevicePath \"\""
Sep 20 16:57:38 addons-205029 kubelet[2429]: I0920 16:57:38.055100 2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-khs5m\" (UniqueName: \"kubernetes.io/projected/243fbbcd-f60b-492a-ab03-a7425f4bce3b-kube-api-access-khs5m\") on node \"addons-205029\" DevicePath \"\""
==> storage-provisioner [d38f69a74eb1] <==
I0920 16:45:02.364685 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0920 16:45:02.452457 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0920 16:45:02.452594 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0920 16:45:02.545268 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0920 16:45:02.545521 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-205029_9d42ca77-fe74-48dc-9687-29c7e7fa26f2!
I0920 16:45:02.546684 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25331077-7087-4405-a476-a7c45133fe38", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-205029_9d42ca77-fe74-48dc-9687-29c7e7fa26f2 became leader
I0920 16:45:02.646435 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-205029_9d42ca77-fe74-48dc-9687-29c7e7fa26f2!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-205029 -n addons-205029
helpers_test.go:261: (dbg) Run: kubectl --context addons-205029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-205029 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-205029 describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-205029/192.168.49.2
Start Time:       Fri, 20 Sep 2024 16:48:24 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75jnd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-75jnd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-205029
  Warning  Failed     7m56s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    7m44s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m44s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m44s (x4 over 9m14s)  kubelet            Error: ErrImagePull
  Normal   BackOff    4m7s (x22 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.50s)