=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.782635ms
I0918 19:50:36.455832 7565 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-m9pdd" [be6aeece-e555-4628-88de-f374e1e78aa3] Running
I0918 19:50:36.463878 7565 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:50:36.463997 7565 kapi.go:107] duration metric: took 11.219991ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004205442s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rxskq" [e2a2228e-559d-447a-953c-77300e373ad5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004907319s
addons_test.go:342: (dbg) Run: kubectl --context addons-923322 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.129814598s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run: out/minikube-linux-arm64 -p addons-923322 ip
2024/09/18 19:51:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-arm64 -p addons-923322 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable registry --alsologtostderr -v=1: (1.179674417s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-923322
helpers_test.go:235: (dbg) docker inspect addons-923322:
-- stdout --
[
{
"Id": "b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679",
"Created": "2024-09-18T19:38:35.634748954Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8813,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-09-18T19:38:35.805208547Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
"ResolvConfPath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/hostname",
"HostsPath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/hosts",
"LogPath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679-json.log",
"Name": "/addons-923322",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-923322:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "addons-923322",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3-init/diff:/var/lib/docker/overlay2/2d5f4db6bef4f73456b3d6729836bc99a064b2dff1ec273e613fe21fbf6cf84d/diff",
"MergedDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3/merged",
"UpperDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3/diff",
"WorkDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-923322",
"Source": "/var/lib/docker/volumes/addons-923322/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-923322",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-923322",
"name.minikube.sigs.k8s.io": "addons-923322",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c5c1622f5e172c825854ba80ad33abf3a0c4099418ab8a0bcc30e9f90fbcb52d",
"SandboxKey": "/var/run/docker/netns/c5c1622f5e17",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-923322": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "07fa96b9e48eadd9fc9febbf6a977a0a660ba2cb85d425369ac66bc0a9c06077",
"EndpointID": "b6c0d38c2dcfb3232290e66faef2473565391a3c14d0c37e67380fcbcf4cf7e8",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-923322",
"b38fecd59f11"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p addons-923322 -n addons-923322
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p addons-923322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 logs -n 25: (1.766465621s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-843008 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | |
| | -p download-only-843008 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| delete | -p download-only-843008 | download-only-843008 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| start | -o=json --download-only | download-only-593891 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| | -p download-only-593891 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| delete | -p download-only-593891 | download-only-593891 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| delete | -p download-only-843008 | download-only-843008 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| delete | -p download-only-593891 | download-only-593891 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| start | --download-only -p | download-docker-404631 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| | download-docker-404631 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-404631 | download-docker-404631 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| start | --download-only -p | binary-mirror-976038 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| | binary-mirror-976038 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:41665 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-976038 | binary-mirror-976038 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| addons | enable dashboard -p | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| | addons-923322 | | | | | |
| addons | disable dashboard -p | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| | addons-923322 | | | | | |
| start | -p addons-923322 --wait=true | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:41 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | addons-923322 addons disable | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:42 UTC | 18 Sep 24 19:42 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-923322 addons | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-923322 addons | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-923322 addons | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| | addons-923322 | | | | | |
| ssh | addons-923322 ssh curl -s | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-923322 ip | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| addons | addons-923322 addons disable | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-923322 ip | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
| addons | addons-923322 addons disable | addons-923322 | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/18 19:38:11
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.23.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0918 19:38:11.208338 8317 out.go:345] Setting OutFile to fd 1 ...
I0918 19:38:11.208497 8317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:38:11.208509 8317 out.go:358] Setting ErrFile to fd 2...
I0918 19:38:11.208514 8317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:38:11.208759 8317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:38:11.209222 8317 out.go:352] Setting JSON to false
I0918 19:38:11.209948 8317 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1239,"bootTime":1726687053,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0918 19:38:11.210017 8317 start.go:139] virtualization:
I0918 19:38:11.211668 8317 out.go:177] * [addons-923322] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I0918 19:38:11.213230 8317 out.go:177] - MINIKUBE_LOCATION=19667
I0918 19:38:11.213404 8317 notify.go:220] Checking for updates...
I0918 19:38:11.215898 8317 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0918 19:38:11.217315 8317 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
I0918 19:38:11.219026 8317 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
I0918 19:38:11.220367 8317 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0918 19:38:11.221558 8317 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0918 19:38:11.223042 8317 driver.go:394] Setting default libvirt URI to qemu:///system
I0918 19:38:11.245017 8317 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
I0918 19:38:11.245149 8317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0918 19:38:11.307127 8317 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 19:38:11.297373101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0918 19:38:11.307295 8317 docker.go:318] overlay module found
I0918 19:38:11.308662 8317 out.go:177] * Using the docker driver based on user configuration
I0918 19:38:11.309757 8317 start.go:297] selected driver: docker
I0918 19:38:11.309770 8317 start.go:901] validating driver "docker" against <nil>
I0918 19:38:11.309783 8317 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0918 19:38:11.310384 8317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0918 19:38:11.366839 8317 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 19:38:11.355172228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0918 19:38:11.367038 8317 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0918 19:38:11.367307 8317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0918 19:38:11.368740 8317 out.go:177] * Using Docker driver with root privileges
I0918 19:38:11.370000 8317 cni.go:84] Creating CNI manager for ""
I0918 19:38:11.370080 8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0918 19:38:11.370095 8317 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0918 19:38:11.370177 8317 start.go:340] cluster config:
{Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0918 19:38:11.371712 8317 out.go:177] * Starting "addons-923322" primary control-plane node in "addons-923322" cluster
I0918 19:38:11.372986 8317 cache.go:121] Beginning downloading kic base image for docker with docker
I0918 19:38:11.374235 8317 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
I0918 19:38:11.375404 8317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:11.375466 8317 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0918 19:38:11.375479 8317 cache.go:56] Caching tarball of preloaded images
I0918 19:38:11.375492 8317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
I0918 19:38:11.375556 8317 preload.go:172] Found /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0918 19:38:11.375566 8317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0918 19:38:11.375910 8317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/config.json ...
I0918 19:38:11.375938 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/config.json: {Name:mk413e862c8527b15a3dc7cd54f06f1891ae5447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:11.391452 8317 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
I0918 19:38:11.391585 8317 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
I0918 19:38:11.391620 8317 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
I0918 19:38:11.391625 8317 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
I0918 19:38:11.391633 8317 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
I0918 19:38:11.391639 8317 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
I0918 19:38:28.909573 8317 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
I0918 19:38:28.909612 8317 cache.go:194] Successfully downloaded all kic artifacts
I0918 19:38:28.909658 8317 start.go:360] acquireMachinesLock for addons-923322: {Name:mk40670ccc3fb08a13df272a775834621a889ecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 19:38:28.909812 8317 start.go:364] duration metric: took 125.379µs to acquireMachinesLock for "addons-923322"
I0918 19:38:28.909854 8317 start.go:93] Provisioning new machine with config: &{Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0918 19:38:28.909956 8317 start.go:125] createHost starting for "" (driver="docker")
I0918 19:38:28.912711 8317 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0918 19:38:28.912989 8317 start.go:159] libmachine.API.Create for "addons-923322" (driver="docker")
I0918 19:38:28.913027 8317 client.go:168] LocalClient.Create starting
I0918 19:38:28.913167 8317 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem
I0918 19:38:29.254436 8317 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem
I0918 19:38:29.536671 8317 cli_runner.go:164] Run: docker network inspect addons-923322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0918 19:38:29.562202 8317 cli_runner.go:211] docker network inspect addons-923322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0918 19:38:29.562296 8317 network_create.go:284] running [docker network inspect addons-923322] to gather additional debugging logs...
I0918 19:38:29.562321 8317 cli_runner.go:164] Run: docker network inspect addons-923322
W0918 19:38:29.577352 8317 cli_runner.go:211] docker network inspect addons-923322 returned with exit code 1
I0918 19:38:29.577387 8317 network_create.go:287] error running [docker network inspect addons-923322]: docker network inspect addons-923322: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-923322 not found
I0918 19:38:29.577400 8317 network_create.go:289] output of [docker network inspect addons-923322]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-923322 not found
** /stderr **
I0918 19:38:29.577500 8317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0918 19:38:29.595996 8317 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3c840}
I0918 19:38:29.596044 8317 network_create.go:124] attempt to create docker network addons-923322 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0918 19:38:29.596103 8317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-923322 addons-923322
I0918 19:38:29.660537 8317 network_create.go:108] docker network addons-923322 192.168.49.0/24 created
I0918 19:38:29.660568 8317 kic.go:121] calculated static IP "192.168.49.2" for the "addons-923322" container
I0918 19:38:29.660640 8317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0918 19:38:29.678309 8317 cli_runner.go:164] Run: docker volume create addons-923322 --label name.minikube.sigs.k8s.io=addons-923322 --label created_by.minikube.sigs.k8s.io=true
I0918 19:38:29.695271 8317 oci.go:103] Successfully created a docker volume addons-923322
I0918 19:38:29.695364 8317 cli_runner.go:164] Run: docker run --rm --name addons-923322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-923322 --entrypoint /usr/bin/test -v addons-923322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
I0918 19:38:31.809829 8317 cli_runner.go:217] Completed: docker run --rm --name addons-923322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-923322 --entrypoint /usr/bin/test -v addons-923322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.114405088s)
I0918 19:38:31.809858 8317 oci.go:107] Successfully prepared a docker volume addons-923322
I0918 19:38:31.809881 8317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:31.809901 8317 kic.go:194] Starting extracting preloaded images to volume ...
I0918 19:38:31.809969 8317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-923322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
I0918 19:38:35.561012 8317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-923322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.751000338s)
I0918 19:38:35.561042 8317 kic.go:203] duration metric: took 3.751138796s to extract preloaded images to volume ...
W0918 19:38:35.561188 8317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0918 19:38:35.561302 8317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0918 19:38:35.618595 8317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-923322 --name addons-923322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-923322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-923322 --network addons-923322 --ip 192.168.49.2 --volume addons-923322:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
I0918 19:38:35.987353 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Running}}
I0918 19:38:36.008442 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:38:36.047830 8317 cli_runner.go:164] Run: docker exec addons-923322 stat /var/lib/dpkg/alternatives/iptables
I0918 19:38:36.128869 8317 oci.go:144] the created container "addons-923322" has a running status.
I0918 19:38:36.128907 8317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa...
I0918 19:38:36.435484 8317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0918 19:38:36.477363 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:38:36.503978 8317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0918 19:38:36.504016 8317 kic_runner.go:114] Args: [docker exec --privileged addons-923322 chown docker:docker /home/docker/.ssh/authorized_keys]
I0918 19:38:36.597104 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:38:36.624222 8317 machine.go:93] provisionDockerMachine start ...
I0918 19:38:36.624313 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:36.649282 8317 main.go:141] libmachine: Using SSH client type: native
I0918 19:38:36.649536 8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0918 19:38:36.649553 8317 main.go:141] libmachine: About to run SSH command:
hostname
I0918 19:38:36.839880 8317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-923322
I0918 19:38:36.839908 8317 ubuntu.go:169] provisioning hostname "addons-923322"
I0918 19:38:36.839975 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:36.860426 8317 main.go:141] libmachine: Using SSH client type: native
I0918 19:38:36.860662 8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0918 19:38:36.860674 8317 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-923322 && echo "addons-923322" | sudo tee /etc/hostname
I0918 19:38:37.032003 8317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-923322
I0918 19:38:37.032116 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:37.055902 8317 main.go:141] libmachine: Using SSH client type: native
I0918 19:38:37.056163 8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0918 19:38:37.056179 8317 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-923322' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-923322/g' /etc/hosts;
else
echo '127.0.1.1 addons-923322' | sudo tee -a /etc/hosts;
fi
fi
I0918 19:38:37.211677 8317 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0918 19:38:37.211714 8317 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-2236/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-2236/.minikube}
I0918 19:38:37.211748 8317 ubuntu.go:177] setting up certificates
I0918 19:38:37.211767 8317 provision.go:84] configureAuth start
I0918 19:38:37.211851 8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-923322
I0918 19:38:37.229681 8317 provision.go:143] copyHostCerts
I0918 19:38:37.229768 8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem (1078 bytes)
I0918 19:38:37.229897 8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem (1123 bytes)
I0918 19:38:37.229958 8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem (1675 bytes)
I0918 19:38:37.230008 8317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem org=jenkins.addons-923322 san=[127.0.0.1 192.168.49.2 addons-923322 localhost minikube]
I0918 19:38:37.631268 8317 provision.go:177] copyRemoteCerts
I0918 19:38:37.631333 8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0918 19:38:37.631383 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:37.648811 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:38:37.752152 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0918 19:38:37.779201 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0918 19:38:37.804230 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0918 19:38:37.828720 8317 provision.go:87] duration metric: took 616.927396ms to configureAuth
I0918 19:38:37.828755 8317 ubuntu.go:193] setting minikube options for container-runtime
I0918 19:38:37.828986 8317 config.go:182] Loaded profile config "addons-923322": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:38:37.829051 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:37.847331 8317 main.go:141] libmachine: Using SSH client type: native
I0918 19:38:37.847580 8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0918 19:38:37.847599 8317 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0918 19:38:37.991652 8317 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0918 19:38:37.991672 8317 ubuntu.go:71] root file system type: overlay
I0918 19:38:37.991777 8317 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0918 19:38:37.991849 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:38.010817 8317 main.go:141] libmachine: Using SSH client type: native
I0918 19:38:38.011074 8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0918 19:38:38.011157 8317 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0918 19:38:38.175750 8317 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0918 19:38:38.175835 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:38.193566 8317 main.go:141] libmachine: Using SSH client type: native
I0918 19:38:38.193825 8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0918 19:38:38.193850 8317 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0918 19:38:38.975692 8317 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-09-06 12:06:36.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-09-18 19:38:38.169317622 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0918 19:38:38.975754 8317 machine.go:96] duration metric: took 2.351509928s to provisionDockerMachine
I0918 19:38:38.975764 8317 client.go:171] duration metric: took 10.062728262s to LocalClient.Create
I0918 19:38:38.975776 8317 start.go:167] duration metric: took 10.062788872s to libmachine.API.Create "addons-923322"
I0918 19:38:38.975784 8317 start.go:293] postStartSetup for "addons-923322" (driver="docker")
I0918 19:38:38.975801 8317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0918 19:38:38.975867 8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0918 19:38:38.975912 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:38.994366 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:38:39.096834 8317 ssh_runner.go:195] Run: cat /etc/os-release
I0918 19:38:39.100458 8317 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0918 19:38:39.100495 8317 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0918 19:38:39.100508 8317 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0918 19:38:39.100518 8317 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0918 19:38:39.100529 8317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/addons for local assets ...
I0918 19:38:39.100607 8317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/files for local assets ...
I0918 19:38:39.100632 8317 start.go:296] duration metric: took 124.836236ms for postStartSetup
I0918 19:38:39.100962 8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-923322
I0918 19:38:39.118348 8317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/config.json ...
I0918 19:38:39.118643 8317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0918 19:38:39.118696 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:39.135848 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:38:39.232161 8317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0918 19:38:39.236803 8317 start.go:128] duration metric: took 10.326802801s to createHost
I0918 19:38:39.236826 8317 start.go:83] releasing machines lock for "addons-923322", held for 10.32699624s
I0918 19:38:39.236905 8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-923322
I0918 19:38:39.253561 8317 ssh_runner.go:195] Run: cat /version.json
I0918 19:38:39.253623 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:39.253919 8317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0918 19:38:39.253995 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:38:39.279652 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:38:39.282987 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:38:39.505838 8317 ssh_runner.go:195] Run: systemctl --version
I0918 19:38:39.510159 8317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0918 19:38:39.514664 8317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0918 19:38:39.540493 8317 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0918 19:38:39.540572 8317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0918 19:38:39.567367 8317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0918 19:38:39.567434 8317 start.go:495] detecting cgroup driver to use...
I0918 19:38:39.567474 8317 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0918 19:38:39.567583 8317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0918 19:38:39.584583 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0918 19:38:39.594241 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0918 19:38:39.603947 8317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0918 19:38:39.604014 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0918 19:38:39.614009 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0918 19:38:39.624646 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0918 19:38:39.635418 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0918 19:38:39.645378 8317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0918 19:38:39.654937 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0918 19:38:39.665028 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0918 19:38:39.675343 8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0918 19:38:39.685561 8317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0918 19:38:39.694793 8317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0918 19:38:39.704015 8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0918 19:38:39.793143 8317 ssh_runner.go:195] Run: sudo systemctl restart containerd
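The sed passes above pin containerd to the cgroupfs driver, the expected pause image, and the CNI conf dir before the restart. A small spot-check sketch (same config path as above; this output was not captured in the run):
# Each of these keys was rewritten by the sed commands above.
sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
# Expected after the edits: SystemdCgroup = false,
# sandbox_image = "registry.k8s.io/pause:3.10", conf_dir = "/etc/cni/net.d".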
I0918 19:38:39.890442 8317 start.go:495] detecting cgroup driver to use...
I0918 19:38:39.890545 8317 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0918 19:38:39.890638 8317 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0918 19:38:39.915605 8317 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0918 19:38:39.915674 8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0918 19:38:39.931609 8317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0918 19:38:39.951007 8317 ssh_runner.go:195] Run: which cri-dockerd
I0918 19:38:39.956315 8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0918 19:38:39.968745 8317 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0918 19:38:39.990865 8317 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0918 19:38:40.152272 8317 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0918 19:38:40.253078 8317 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0918 19:38:40.253208 8317 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
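The 130-byte daemon.json pushed here is not printed in the log; per the preceding line its job is to keep Docker on the cgroupfs driver so it matches the kubelet configuration. A hypothetical daemon.json that does only that (illustrative, not the exact file minikube wrote) might look like:
# Hypothetical contents; the actual file written above is not shown in the log.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
# followed by the daemon-reload and docker restart seen in the next lines.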
I0918 19:38:40.276703 8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0918 19:38:40.363628 8317 ssh_runner.go:195] Run: sudo systemctl restart docker
I0918 19:38:40.623283 8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0918 19:38:40.636696 8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0918 19:38:40.649690 8317 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0918 19:38:40.742463 8317 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0918 19:38:40.823448 8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0918 19:38:40.903407 8317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0918 19:38:40.918161 8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0918 19:38:40.930434 8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0918 19:38:41.025618 8317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0918 19:38:41.096182 8317 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0918 19:38:41.096328 8317 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0918 19:38:41.101768 8317 start.go:563] Will wait 60s for crictl version
I0918 19:38:41.101884 8317 ssh_runner.go:195] Run: which crictl
I0918 19:38:41.106019 8317 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0918 19:38:41.145130 8317 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0918 19:38:41.145242 8317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0918 19:38:41.168096 8317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0918 19:38:41.196847 8317 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0918 19:38:41.196945 8317 cli_runner.go:164] Run: docker network inspect addons-923322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0918 19:38:41.213577 8317 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0918 19:38:41.218348 8317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
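The /etc/hosts edit above uses a small shell pattern worth noting: the filtered file is assembled under /tmp first and then installed with sudo cp, because the output redirection itself runs as the unprivileged ssh user and could not write /etc/hosts directly. The same pattern, spelled out as a generic sketch (not additional test output):
# 1. drop any stale host.minikube.internal line, 2. append the fresh mapping,
# 3. install the result with root privileges.
{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts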
I0918 19:38:41.231368 8317 kubeadm.go:883] updating cluster {Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0918 19:38:41.231498 8317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:41.231557 8317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0918 19:38:41.250623 8317 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0918 19:38:41.250646 8317 docker.go:615] Images already preloaded, skipping extraction
I0918 19:38:41.250733 8317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0918 19:38:41.269488 8317 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0918 19:38:41.269509 8317 cache_images.go:84] Images are preloaded, skipping loading
I0918 19:38:41.269518 8317 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
I0918 19:38:41.269609 8317 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-923322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0918 19:38:41.269686 8317 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0918 19:38:41.315345 8317 cni.go:84] Creating CNI manager for ""
I0918 19:38:41.315424 8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0918 19:38:41.315441 8317 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0918 19:38:41.315465 8317 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-923322 NodeName:addons-923322 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0918 19:38:41.315638 8317 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "addons-923322"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
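The generated config above still uses the deprecated v1beta3 kubeadm API, which kubeadm warns about during init further below. Before it is handed to kubeadm init --config it can be sanity-checked or migrated offline; a hedged sketch, assuming the v1.31.1 kubeadm binary staged under /var/lib/minikube/binaries and the kubeadm.yaml path used later in the run:
# Validate the file kubeadm will consume.
sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
# Rewrite it against the current API version, as the init warning suggests.
sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml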
I0918 19:38:41.315713 8317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0918 19:38:41.324486 8317 binaries.go:44] Found k8s binaries, skipping transfer
I0918 19:38:41.324585 8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0918 19:38:41.333710 8317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0918 19:38:41.351973 8317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0918 19:38:41.370351 8317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0918 19:38:41.389698 8317 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0918 19:38:41.393323 8317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0918 19:38:41.404705 8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0918 19:38:41.499911 8317 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0918 19:38:41.515332 8317 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322 for IP: 192.168.49.2
I0918 19:38:41.515397 8317 certs.go:194] generating shared ca certs ...
I0918 19:38:41.515430 8317 certs.go:226] acquiring lock for ca certs: {Name:mk958e02b356056556309ee300f2f34fdfb18284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:41.515594 8317 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key
I0918 19:38:41.935140 8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt ...
I0918 19:38:41.935173 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt: {Name:mkf111cf3b15e82ccb3baf57879afd2414af0c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:41.935394 8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key ...
I0918 19:38:41.935408 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key: {Name:mk477b03db8b73097773933aed42528067072d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:41.935501 8317 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key
I0918 19:38:42.191991 8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt ...
I0918 19:38:42.192028 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt: {Name:mk9f33625027085912b668e637f81c0e9aeb9347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:42.192241 8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key ...
I0918 19:38:42.192255 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key: {Name:mk990ac1af6211151bb505f89d8555cf1e9130ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:42.192339 8317 certs.go:256] generating profile certs ...
I0918 19:38:42.192404 8317 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.key
I0918 19:38:42.192433 8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt with IP's: []
I0918 19:38:42.515618 8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt ...
I0918 19:38:42.515645 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: {Name:mkaa8b50d1d5114bb4732284de066e243de0dca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:42.515835 8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.key ...
I0918 19:38:42.515849 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.key: {Name:mkfcfdbdc2b8cd7ffc00401710f1d36e0fb59a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:42.515926 8317 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b
I0918 19:38:42.515954 8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0918 19:38:43.650468 8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b ...
I0918 19:38:43.650502 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b: {Name:mkc592eaf84c2356572fc618c3e4bc7ff514809b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:43.650683 8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b ...
I0918 19:38:43.650697 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b: {Name:mka6411585c3e092cf2a25636b75af98e4295e26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:43.650780 8317 certs.go:381] copying /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b -> /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt
I0918 19:38:43.650863 8317 certs.go:385] copying /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b -> /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key
I0918 19:38:43.650917 8317 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key
I0918 19:38:43.650936 8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt with IP's: []
I0918 19:38:44.488973 8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt ...
I0918 19:38:44.489009 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt: {Name:mkd8eaa979655bcdecba0f9ea6e35c568f3aa35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:44.489209 8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key ...
I0918 19:38:44.489222 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key: {Name:mk381bf7a929d54726eff6684a7b7e9eeee5a02b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:44.489413 8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem (1679 bytes)
I0918 19:38:44.489456 8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem (1078 bytes)
I0918 19:38:44.489486 8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem (1123 bytes)
I0918 19:38:44.489516 8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem (1675 bytes)
I0918 19:38:44.490120 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0918 19:38:44.513973 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0918 19:38:44.539034 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0918 19:38:44.564127 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0918 19:38:44.589395 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0918 19:38:44.616442 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0918 19:38:44.644314 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0918 19:38:44.671032 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0918 19:38:44.697206 8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0918 19:38:44.722045 8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0918 19:38:44.741487 8317 ssh_runner.go:195] Run: openssl version
I0918 19:38:44.747137 8317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0918 19:38:44.757500 8317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:44.761196 8317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:44.761301 8317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:44.768893 8317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
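The two steps above reproduce what c_rehash or update-ca-certificates would do: the CA is exposed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941 here), which is how OpenSSL-based clients locate trust anchors. A compact sketch of the same idea, using the paths from this run:
# Prints the subject hash that becomes the symlink name (b5213941 above).
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
# Hash-named symlink; the .0 suffix disambiguates colliding hashes.
sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0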
I0918 19:38:44.778270 8317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0918 19:38:44.781727 8317 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0918 19:38:44.781776 8317 kubeadm.go:392] StartCluster: {Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0918 19:38:44.781909 8317 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0918 19:38:44.798609 8317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0918 19:38:44.807449 8317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0918 19:38:44.816582 8317 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0918 19:38:44.816652 8317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0918 19:38:44.826554 8317 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0918 19:38:44.826620 8317 kubeadm.go:157] found existing configuration files:
I0918 19:38:44.826691 8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0918 19:38:44.835914 8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0918 19:38:44.835983 8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0918 19:38:44.845006 8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0918 19:38:44.854008 8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0918 19:38:44.854097 8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0918 19:38:44.862975 8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0918 19:38:44.872704 8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0918 19:38:44.872818 8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0918 19:38:44.881473 8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0918 19:38:44.890394 8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0918 19:38:44.890497 8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0918 19:38:44.899085 8317 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0918 19:38:44.943015 8317 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0918 19:38:44.943104 8317 kubeadm.go:310] [preflight] Running pre-flight checks
I0918 19:38:44.968654 8317 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0918 19:38:44.968839 8317 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
I0918 19:38:44.968916 8317 kubeadm.go:310] OS: Linux
I0918 19:38:44.968992 8317 kubeadm.go:310] CGROUPS_CPU: enabled
I0918 19:38:44.969070 8317 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0918 19:38:44.969147 8317 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0918 19:38:44.969232 8317 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0918 19:38:44.969310 8317 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0918 19:38:44.969400 8317 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0918 19:38:44.969479 8317 kubeadm.go:310] CGROUPS_PIDS: enabled
I0918 19:38:44.969566 8317 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0918 19:38:44.969642 8317 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0918 19:38:45.082279 8317 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0918 19:38:45.082425 8317 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0918 19:38:45.082532 8317 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0918 19:38:45.106095 8317 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0918 19:38:45.110749 8317 out.go:235] - Generating certificates and keys ...
I0918 19:38:45.111012 8317 kubeadm.go:310] [certs] Using existing ca certificate authority
I0918 19:38:45.111145 8317 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0918 19:38:45.481358 8317 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0918 19:38:46.038653 8317 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0918 19:38:46.455237 8317 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0918 19:38:46.906953 8317 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0918 19:38:47.540293 8317 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0918 19:38:47.540595 8317 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-923322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0918 19:38:48.147649 8317 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0918 19:38:48.147874 8317 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-923322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0918 19:38:48.465599 8317 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0918 19:38:48.678189 8317 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0918 19:38:49.270631 8317 kubeadm.go:310] [certs] Generating "sa" key and public key
I0918 19:38:49.270849 8317 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0918 19:38:49.555442 8317 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0918 19:38:50.420523 8317 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0918 19:38:51.171657 8317 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0918 19:38:51.726296 8317 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0918 19:38:52.169784 8317 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0918 19:38:52.170816 8317 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0918 19:38:52.174160 8317 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0918 19:38:52.176108 8317 out.go:235] - Booting up control plane ...
I0918 19:38:52.176212 8317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0918 19:38:52.176289 8317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0918 19:38:52.177645 8317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0918 19:38:52.190406 8317 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0918 19:38:52.196953 8317 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0918 19:38:52.197009 8317 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0918 19:38:52.299744 8317 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0918 19:38:52.299864 8317 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0918 19:38:53.296316 8317 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001652251s
I0918 19:38:53.296437 8317 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0918 19:38:59.297834 8317 kubeadm.go:310] [api-check] The API server is healthy after 6.001657206s
I0918 19:38:59.323589 8317 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0918 19:38:59.344013 8317 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0918 19:38:59.378343 8317 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0918 19:38:59.378785 8317 kubeadm.go:310] [mark-control-plane] Marking the node addons-923322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0918 19:38:59.393303 8317 kubeadm.go:310] [bootstrap-token] Using token: 96pzjz.thy6lyeyktx1vx9a
I0918 19:38:59.396113 8317 out.go:235] - Configuring RBAC rules ...
I0918 19:38:59.396244 8317 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0918 19:38:59.405658 8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0918 19:38:59.413935 8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0918 19:38:59.420442 8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0918 19:38:59.425820 8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0918 19:38:59.430123 8317 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0918 19:38:59.705030 8317 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0918 19:39:00.188651 8317 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0918 19:39:00.704101 8317 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0918 19:39:00.705355 8317 kubeadm.go:310]
I0918 19:39:00.705430 8317 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0918 19:39:00.705436 8317 kubeadm.go:310]
I0918 19:39:00.705527 8317 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0918 19:39:00.705544 8317 kubeadm.go:310]
I0918 19:39:00.705570 8317 kubeadm.go:310] mkdir -p $HOME/.kube
I0918 19:39:00.705634 8317 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0918 19:39:00.705689 8317 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0918 19:39:00.705698 8317 kubeadm.go:310]
I0918 19:39:00.705752 8317 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0918 19:39:00.705760 8317 kubeadm.go:310]
I0918 19:39:00.705813 8317 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0918 19:39:00.705821 8317 kubeadm.go:310]
I0918 19:39:00.705873 8317 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0918 19:39:00.705952 8317 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0918 19:39:00.706025 8317 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0918 19:39:00.706034 8317 kubeadm.go:310]
I0918 19:39:00.706119 8317 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0918 19:39:00.706203 8317 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0918 19:39:00.706212 8317 kubeadm.go:310]
I0918 19:39:00.706297 8317 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 96pzjz.thy6lyeyktx1vx9a \
I0918 19:39:00.706404 8317 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:9eecf3dbed3b3dd0d2c4f53b9183d7bca1cdee4ca3fecbf261d3f759ffc8a8d8 \
I0918 19:39:00.706428 8317 kubeadm.go:310] --control-plane
I0918 19:39:00.706436 8317 kubeadm.go:310]
I0918 19:39:00.706531 8317 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0918 19:39:00.706541 8317 kubeadm.go:310]
I0918 19:39:00.706624 8317 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 96pzjz.thy6lyeyktx1vx9a \
I0918 19:39:00.706732 8317 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:9eecf3dbed3b3dd0d2c4f53b9183d7bca1cdee4ca3fecbf261d3f759ffc8a8d8
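The --discovery-token-ca-cert-hash printed in the join commands is a SHA-256 over the cluster CA's public key, so it can be recomputed later without keeping the token around. A hedged sketch using the standard OpenSSL pipeline (note that this cluster keeps its CA under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki):
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
# The hex digest should match the sha256:9eecf3... value in the join commands above.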
I0918 19:39:00.710356 8317 kubeadm.go:310] W0918 19:38:44.934790 1831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0918 19:39:00.710662 8317 kubeadm.go:310] W0918 19:38:44.935805 1831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0918 19:39:00.710885 8317 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
I0918 19:39:00.710994 8317 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0918 19:39:00.711013 8317 cni.go:84] Creating CNI manager for ""
I0918 19:39:00.711031 8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0918 19:39:00.713910 8317 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0918 19:39:00.716824 8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0918 19:39:00.725846 8317 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0918 19:39:00.746943 8317 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0918 19:39:00.747072 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:00.747165 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-923322 minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-923322 minikube.k8s.io/primary=true
I0918 19:39:00.985496 8317 ops.go:34] apiserver oom_adj: -16
I0918 19:39:00.985647 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:01.485785 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:01.986567 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:02.485797 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:02.985997 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:03.486321 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:03.985991 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:04.485879 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:04.985809 8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:39:05.104949 8317 kubeadm.go:1113] duration metric: took 4.357920781s to wait for elevateKubeSystemPrivileges
I0918 19:39:05.104983 8317 kubeadm.go:394] duration metric: took 20.323211845s to StartCluster
I0918 19:39:05.105004 8317 settings.go:142] acquiring lock: {Name:mka60e55fdc2e0389e1fbfa23792ee022689e7b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:39:05.105150 8317 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19667-2236/kubeconfig
I0918 19:39:05.105558 8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/kubeconfig: {Name:mk8ee68a7fcf0033412d5c9abf2a4743eba0e82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:39:05.105767 8317 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0918 19:39:05.105894 8317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0918 19:39:05.106160 8317 config.go:182] Loaded profile config "addons-923322": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:39:05.106214 8317 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0918 19:39:05.106303 8317 addons.go:69] Setting yakd=true in profile "addons-923322"
I0918 19:39:05.106320 8317 addons.go:234] Setting addon yakd=true in "addons-923322"
I0918 19:39:05.106345 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.106837 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.107305 8317 addons.go:69] Setting metrics-server=true in profile "addons-923322"
I0918 19:39:05.107333 8317 addons.go:234] Setting addon metrics-server=true in "addons-923322"
I0918 19:39:05.107370 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.107831 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.112005 8317 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-923322"
I0918 19:39:05.112097 8317 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-923322"
I0918 19:39:05.112173 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.112293 8317 addons.go:69] Setting cloud-spanner=true in profile "addons-923322"
I0918 19:39:05.112404 8317 addons.go:234] Setting addon cloud-spanner=true in "addons-923322"
I0918 19:39:05.112450 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.112638 8317 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-923322"
I0918 19:39:05.112676 8317 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-923322"
I0918 19:39:05.112696 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.113147 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.115694 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.119761 8317 addons.go:69] Setting default-storageclass=true in profile "addons-923322"
I0918 19:39:05.119856 8317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-923322"
I0918 19:39:05.120258 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.123452 8317 addons.go:69] Setting registry=true in profile "addons-923322"
I0918 19:39:05.123534 8317 addons.go:234] Setting addon registry=true in "addons-923322"
I0918 19:39:05.123612 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.124136 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.124666 8317 addons.go:69] Setting gcp-auth=true in profile "addons-923322"
I0918 19:39:05.124711 8317 mustload.go:65] Loading cluster: addons-923322
I0918 19:39:05.124891 8317 config.go:182] Loaded profile config "addons-923322": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:39:05.125133 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.135100 8317 addons.go:69] Setting storage-provisioner=true in profile "addons-923322"
I0918 19:39:05.135230 8317 addons.go:234] Setting addon storage-provisioner=true in "addons-923322"
I0918 19:39:05.135325 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.136041 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.137209 8317 addons.go:69] Setting ingress=true in profile "addons-923322"
I0918 19:39:05.137242 8317 addons.go:234] Setting addon ingress=true in "addons-923322"
I0918 19:39:05.137289 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.137758 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.159017 8317 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-923322"
I0918 19:39:05.159054 8317 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-923322"
I0918 19:39:05.159330 8317 addons.go:69] Setting ingress-dns=true in profile "addons-923322"
I0918 19:39:05.159412 8317 addons.go:234] Setting addon ingress-dns=true in "addons-923322"
I0918 19:39:05.159486 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.160228 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.160695 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.175319 8317 addons.go:69] Setting volcano=true in profile "addons-923322"
I0918 19:39:05.175355 8317 addons.go:234] Setting addon volcano=true in "addons-923322"
I0918 19:39:05.175400 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.175910 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.176135 8317 addons.go:69] Setting inspektor-gadget=true in profile "addons-923322"
I0918 19:39:05.176163 8317 addons.go:234] Setting addon inspektor-gadget=true in "addons-923322"
I0918 19:39:05.176198 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.176643 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.188439 8317 out.go:177] * Verifying Kubernetes components...
I0918 19:39:05.203989 8317 addons.go:69] Setting volumesnapshots=true in profile "addons-923322"
I0918 19:39:05.204026 8317 addons.go:234] Setting addon volumesnapshots=true in "addons-923322"
I0918 19:39:05.204064 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.204566 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.265990 8317 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0918 19:39:05.269671 8317 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0918 19:39:05.269828 8317 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0918 19:39:05.269856 8317 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0918 19:39:05.269959 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.283712 8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0918 19:39:05.289568 8317 addons.go:234] Setting addon default-storageclass=true in "addons-923322"
I0918 19:39:05.289665 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.290132 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.292418 8317 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0918 19:39:05.292907 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.313992 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.318404 8317 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0918 19:39:05.318535 8317 out.go:177] - Using image docker.io/registry:2.8.3
I0918 19:39:05.318785 8317 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0918 19:39:05.318798 8317 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0918 19:39:05.318862 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.334197 8317 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0918 19:39:05.353890 8317 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0918 19:39:05.353976 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0918 19:39:05.367644 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.355084 8317 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-923322"
I0918 19:39:05.374631 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:05.377626 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:05.384517 8317 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0918 19:39:05.388916 8317 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0918 19:39:05.388975 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0918 19:39:05.389063 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.406811 8317 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0918 19:39:05.406979 8317 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0918 19:39:05.407066 8317 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0918 19:39:05.407282 8317 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0918 19:39:05.433428 8317 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0918 19:39:05.433514 8317 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0918 19:39:05.433625 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.436923 8317 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0918 19:39:05.441303 8317 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0918 19:39:05.447776 8317 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0918 19:39:05.447798 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0918 19:39:05.447864 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.451453 8317 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0918 19:39:05.455654 8317 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0918 19:39:05.463298 8317 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0918 19:39:05.464733 8317 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0918 19:39:05.464764 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0918 19:39:05.464834 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.479522 8317 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0918 19:39:05.479763 8317 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0918 19:39:05.487198 8317 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0918 19:39:05.487456 8317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:39:05.487489 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0918 19:39:05.487582 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.492512 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0918 19:39:05.492541 8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0918 19:39:05.492624 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.517317 8317 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0918 19:39:05.520087 8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0918 19:39:05.520114 8317 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0918 19:39:05.520184 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.530045 8317 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0918 19:39:05.535674 8317 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0918 19:39:05.535767 8317 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0918 19:39:05.535844 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.546103 8317 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0918 19:39:05.546125 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0918 19:39:05.546188 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.549600 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.551631 8317 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0918 19:39:05.561914 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.571301 8317 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0918 19:39:05.575766 8317 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0918 19:39:05.579356 8317 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0918 19:39:05.579377 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0918 19:39:05.579440 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.630294 8317 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0918 19:39:05.633693 8317 out.go:177] - Using image docker.io/busybox:stable
I0918 19:39:05.636807 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.637638 8317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0918 19:39:05.637658 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0918 19:39:05.637719 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:05.651508 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.686815 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.713424 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.715661 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.719625 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.763839 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.764245 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.777174 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:05.778594 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
W0918 19:39:05.780528 8317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0918 19:39:05.780563 8317 retry.go:31] will retry after 310.121584ms: ssh: handshake failed: EOF
I0918 19:39:05.799001 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
W0918 19:39:05.800227 8317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0918 19:39:05.800252 8317 retry.go:31] will retry after 155.211495ms: ssh: handshake failed: EOF
I0918 19:39:05.808792 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:06.098336 8317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0918 19:39:06.098491 8317 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0918 19:39:06.630986 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0918 19:39:06.802995 8317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0918 19:39:06.803069 8317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0918 19:39:06.817310 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:39:06.837823 8317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0918 19:39:06.837892 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0918 19:39:06.878694 8317 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0918 19:39:06.878772 8317 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0918 19:39:06.885476 8317 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0918 19:39:06.885545 8317 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0918 19:39:06.934218 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0918 19:39:06.951456 8317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0918 19:39:06.951533 8317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0918 19:39:06.954287 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0918 19:39:06.978575 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0918 19:39:06.995336 8317 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0918 19:39:06.995413 8317 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0918 19:39:07.013465 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0918 19:39:07.013487 8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0918 19:39:07.033888 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0918 19:39:07.037932 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0918 19:39:07.120426 8317 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0918 19:39:07.120501 8317 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0918 19:39:07.160113 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0918 19:39:07.160186 8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0918 19:39:07.165365 8317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0918 19:39:07.165440 8317 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0918 19:39:07.193778 8317 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0918 19:39:07.193855 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0918 19:39:07.204233 8317 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0918 19:39:07.204296 8317 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0918 19:39:07.211963 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0918 19:39:07.220172 8317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0918 19:39:07.220247 8317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0918 19:39:07.352664 8317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0918 19:39:07.352751 8317 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0918 19:39:07.424821 8317 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0918 19:39:07.424903 8317 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0918 19:39:07.438775 8317 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0918 19:39:07.438842 8317 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0918 19:39:07.531673 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0918 19:39:07.534812 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0918 19:39:07.534888 8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0918 19:39:07.572971 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0918 19:39:07.573060 8317 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0918 19:39:07.703628 8317 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0918 19:39:07.703708 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0918 19:39:07.710069 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0918 19:39:07.738049 8317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0918 19:39:07.738134 8317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0918 19:39:07.767125 8317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0918 19:39:07.767197 8317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0918 19:39:07.929895 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0918 19:39:07.929977 8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0918 19:39:08.013762 8317 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:39:08.013834 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0918 19:39:08.171181 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:39:08.187935 8317 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0918 19:39:08.188014 8317 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0918 19:39:08.190795 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0918 19:39:08.331124 8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0918 19:39:08.331200 8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0918 19:39:08.341548 8317 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.243026473s)
I0918 19:39:08.341717 8317 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.243346737s)
I0918 19:39:08.341769 8317 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
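The sed pipeline that just completed splices a static hosts entry into the CoreDNS Corefile ahead of its forward plugin (and a log directive ahead of errors), so in-cluster lookups of host.minikube.internal resolve to the host gateway address 192.168.49.1. A minimal sketch of the resulting Corefile fragment, assuming the stock kubeadm layout for everything the sed expressions above do not touch:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }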
I0918 19:39:08.343311 8317 node_ready.go:35] waiting up to 6m0s for node "addons-923322" to be "Ready" ...
I0918 19:39:08.347235 8317 node_ready.go:49] node "addons-923322" has status "Ready":"True"
I0918 19:39:08.347318 8317 node_ready.go:38] duration metric: took 3.97799ms for node "addons-923322" to be "Ready" ...
I0918 19:39:08.347376 8317 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0918 19:39:08.370085 8317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace to be "Ready" ...
I0918 19:39:08.487975 8317 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0918 19:39:08.488046 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0918 19:39:08.653242 8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0918 19:39:08.653314 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0918 19:39:08.729990 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0918 19:39:08.846855 8317 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-923322" context rescaled to 1 replicas
I0918 19:39:09.098320 8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0918 19:39:09.098347 8317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0918 19:39:09.151426 8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0918 19:39:09.151450 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0918 19:39:09.177027 8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0918 19:39:09.177052 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0918 19:39:09.200049 8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0918 19:39:09.200076 8317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0918 19:39:09.222555 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0918 19:39:10.418929 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:12.325387 8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0918 19:39:12.325541 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:12.354091 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:12.881123 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:13.536360 8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0918 19:39:13.923755 8317 addons.go:234] Setting addon gcp-auth=true in "addons-923322"
I0918 19:39:13.923860 8317 host.go:66] Checking if "addons-923322" exists ...
I0918 19:39:13.924427 8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
I0918 19:39:13.958533 8317 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0918 19:39:13.958589 8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
I0918 19:39:13.985386 8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
I0918 19:39:15.376550 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:17.537058 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:18.386750 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.75567805s)
I0918 19:39:18.386909 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.569527313s)
I0918 19:39:18.386986 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.452673423s)
I0918 19:39:18.387068 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.432698229s)
I0918 19:39:18.387319 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.40867665s)
I0918 19:39:18.387477 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.353569674s)
I0918 19:39:18.387616 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.349663642s)
I0918 19:39:18.387646 8317 addons.go:475] Verifying addon ingress=true in "addons-923322"
I0918 19:39:18.387882 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.175845839s)
I0918 19:39:18.388117 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.856372369s)
I0918 19:39:18.388134 8317 addons.go:475] Verifying addon registry=true in "addons-923322"
I0918 19:39:18.388405 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.678249913s)
I0918 19:39:18.388542 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.217256672s)
W0918 19:39:18.388597 8317 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0918 19:39:18.388856 8317 retry.go:31] will retry after 354.904914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
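The failure above is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass custom resource in the same kubectl apply that also creates the snapshot.storage.k8s.io CRDs, and the REST mapping for the new kind is not yet served when the custom resource is validated, hence "ensure CRDs are installed first". minikube handles this by retrying (the --force re-apply at 19:39:18.744 below succeeds after roughly 2.3s). Outside of such a retry loop, the conventional fix is to apply the CRDs first and wait for them to be established before applying resources of the new kind; a sketch using only standard kubectl, with file names taken from the log:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=established --timeout=60s \
                  crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml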
I0918 19:39:18.388921 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.198053001s)
I0918 19:39:18.388442 8317 addons.go:475] Verifying addon metrics-server=true in "addons-923322"
I0918 19:39:18.389133 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.659111267s)
I0918 19:39:18.391143 8317 out.go:177] * Verifying registry addon...
I0918 19:39:18.391219 8317 out.go:177] * Verifying ingress addon...
I0918 19:39:18.394259 8317 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-923322 service yakd-dashboard -n yakd-dashboard
I0918 19:39:18.395189 8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0918 19:39:18.396120 8317 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0918 19:39:18.435793 8317 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0918 19:39:18.435821 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:18.436874 8317 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0918 19:39:18.436893 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W0918 19:39:18.489412 8317 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
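The default-storageclass warning above is an optimistic-concurrency conflict: the addon tries to mark the local-path StorageClass as non-default while another writer (most likely the storage-provisioner-rancher addon being applied concurrently) has just updated the same object, so the patch is rejected with a stale resourceVersion and the API server asks for a retry against the latest version. Re-running the same annotation change usually clears it; the equivalent manual step, as a sketch using the class names that appear in this log ("standard" from the error message, "local-path" from the rancher provisioner), is:

    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'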
I0918 19:39:18.744418 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:39:18.929726 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:18.930399 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:19.097383 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.874770223s)
I0918 19:39:19.097420 8317 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-923322"
I0918 19:39:19.097671 8317 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.139114696s)
I0918 19:39:19.100648 8317 out.go:177] * Verifying csi-hostpath-driver addon...
I0918 19:39:19.100771 8317 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0918 19:39:19.103644 8317 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0918 19:39:19.104527 8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0918 19:39:19.112005 8317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0918 19:39:19.112036 8317 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0918 19:39:19.113912 8317 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:39:19.113940 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:19.241888 8317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0918 19:39:19.241921 8317 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0918 19:39:19.334561 8317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0918 19:39:19.334592 8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0918 19:39:19.403340 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:19.404658 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:19.428569 8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0918 19:39:19.609500 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:19.885665 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:19.930291 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:19.931003 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:20.119330 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:20.402008 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:20.402643 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:20.610139 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:20.903770 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:20.908468 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:21.071305 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.326834314s)
I0918 19:39:21.080442 8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.651780197s)
I0918 19:39:21.087579 8317 addons.go:475] Verifying addon gcp-auth=true in "addons-923322"
I0918 19:39:21.092179 8317 out.go:177] * Verifying gcp-auth addon...
I0918 19:39:21.095765 8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0918 19:39:21.099126 8317 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0918 19:39:21.110355 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:21.400986 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:21.401966 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:21.609949 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:21.902986 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:21.905021 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:22.109713 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:22.377076 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:22.401027 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:22.402524 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:22.611321 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:22.903185 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:22.903608 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:23.109607 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:23.403771 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:23.404671 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:23.609320 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:23.900124 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:23.902262 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:24.110052 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:24.377227 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:24.399374 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:24.401220 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:24.610327 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:24.901954 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:24.903676 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:25.110819 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:25.401695 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:25.403489 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:25.610418 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:25.899581 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:25.902598 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:26.109730 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:26.378023 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:26.401209 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:26.401997 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:26.609774 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:26.900291 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:26.901951 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:27.109382 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:27.400932 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:27.401417 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:27.611153 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:27.900909 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:27.904582 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:28.109858 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:28.400941 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:28.402832 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:28.610615 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:28.876668 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:28.902115 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:28.903574 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:29.109165 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:29.400066 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:29.401534 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:29.609273 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:29.902428 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:29.903903 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:30.141629 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:30.401846 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:30.402424 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:30.609302 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:30.877747 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:30.902748 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:30.904408 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:31.110642 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:31.403179 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:31.404217 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:31.610435 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:31.898819 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:31.906072 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:32.110404 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:32.400411 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:32.400903 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:32.609358 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:32.900013 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:32.901264 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:33.110437 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:33.376912 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:33.402460 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:33.402783 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:33.609819 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:33.901256 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:33.901778 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:34.109189 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:34.415676 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:34.416810 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:34.613125 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:34.903123 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:34.904123 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:35.111704 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:35.377759 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:35.399778 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:35.404667 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:35.611389 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:35.901682 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:35.902524 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:36.109635 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:36.400046 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:36.401012 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:36.609655 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:36.902573 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:36.903767 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:37.111300 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:37.401370 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:37.403518 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:37.608925 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:37.877572 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:37.903056 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:37.904465 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:38.110452 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:38.399766 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:38.401677 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:38.610155 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:38.902260 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:38.903464 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:39.110404 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:39.400393 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:39.402711 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:39.609999 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:39.877937 8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:39.915913 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:39.917261 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:40.118288 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:40.378198 8317 pod_ready.go:93] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:40.378224 8317 pod_ready.go:82] duration metric: took 32.008047555s for pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.378236 8317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.380524 8317 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xsvnk" not found
I0918 19:39:40.380621 8317 pod_ready.go:82] duration metric: took 2.370461ms for pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace to be "Ready" ...
E0918 19:39:40.380650 8317 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xsvnk" not found
I0918 19:39:40.380684 8317 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.386469 8317 pod_ready.go:93] pod "etcd-addons-923322" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:40.386537 8317 pod_ready.go:82] duration metric: took 5.823217ms for pod "etcd-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.386564 8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.392934 8317 pod_ready.go:93] pod "kube-apiserver-addons-923322" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:40.393012 8317 pod_ready.go:82] duration metric: took 6.425166ms for pod "kube-apiserver-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.393038 8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.403793 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:40.404138 8317 pod_ready.go:93] pod "kube-controller-manager-addons-923322" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:40.404150 8317 pod_ready.go:82] duration metric: took 11.089651ms for pod "kube-controller-manager-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.404161 8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c2h5g" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.406597 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:40.574787 8317 pod_ready.go:93] pod "kube-proxy-c2h5g" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:40.574815 8317 pod_ready.go:82] duration metric: took 170.646635ms for pod "kube-proxy-c2h5g" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.574827 8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.609679 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:40.903224 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:40.904549 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:40.975183 8317 pod_ready.go:93] pod "kube-scheduler-addons-923322" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:40.975210 8317 pod_ready.go:82] duration metric: took 400.375731ms for pod "kube-scheduler-addons-923322" in "kube-system" namespace to be "Ready" ...
I0918 19:39:40.975223 8317 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cddcv" in "kube-system" namespace to be "Ready" ...
I0918 19:39:41.113674 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:41.374475 8317 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cddcv" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:41.374558 8317 pod_ready.go:82] duration metric: took 399.325225ms for pod "nvidia-device-plugin-daemonset-cddcv" in "kube-system" namespace to be "Ready" ...
I0918 19:39:41.374584 8317 pod_ready.go:39] duration metric: took 33.02716277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0918 19:39:41.374632 8317 api_server.go:52] waiting for apiserver process to appear ...
I0918 19:39:41.374723 8317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:39:41.392894 8317 api_server.go:72] duration metric: took 36.287089728s to wait for apiserver process to appear ...
I0918 19:39:41.392921 8317 api_server.go:88] waiting for apiserver healthz status ...
I0918 19:39:41.392943 8317 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0918 19:39:41.401502 8317 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0918 19:39:41.403309 8317 api_server.go:141] control plane version: v1.31.1
I0918 19:39:41.403352 8317 api_server.go:131] duration metric: took 10.424121ms to wait for apiserver health ...
I0918 19:39:41.403362 8317 system_pods.go:43] waiting for kube-system pods to appear ...
I0918 19:39:41.405130 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:41.407782 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:41.581317 8317 system_pods.go:59] 17 kube-system pods found
I0918 19:39:41.581356 8317 system_pods.go:61] "coredns-7c65d6cfc9-2g4l7" [c4764c8a-196f-4d05-87d9-0c7d78489b01] Running
I0918 19:39:41.581365 8317 system_pods.go:61] "csi-hostpath-attacher-0" [097868b0-2207-40a0-8638-29d43c76956f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0918 19:39:41.581374 8317 system_pods.go:61] "csi-hostpath-resizer-0" [27f2f88c-98ce-450b-9dd4-39098fa9d3c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0918 19:39:41.581382 8317 system_pods.go:61] "csi-hostpathplugin-qg252" [c24860db-28aa-4eca-aa5e-a23c98d972b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0918 19:39:41.581387 8317 system_pods.go:61] "etcd-addons-923322" [16737166-da6e-4bdb-9dc7-f99f689862bd] Running
I0918 19:39:41.581393 8317 system_pods.go:61] "kube-apiserver-addons-923322" [b9c7d74b-a9a4-442a-a79f-cb524b0620fa] Running
I0918 19:39:41.581398 8317 system_pods.go:61] "kube-controller-manager-addons-923322" [0b1c1ad7-9dec-4a2f-8169-ed1ee5b84119] Running
I0918 19:39:41.581406 8317 system_pods.go:61] "kube-ingress-dns-minikube" [22538dc0-3ac3-4849-83e9-9fc02c69f1d9] Running
I0918 19:39:41.581413 8317 system_pods.go:61] "kube-proxy-c2h5g" [ec2420ba-b77d-4ef0-849d-aad464f1ef73] Running
I0918 19:39:41.581420 8317 system_pods.go:61] "kube-scheduler-addons-923322" [74a64aa9-7aa5-4dea-b57e-a60b25beb834] Running
I0918 19:39:41.581426 8317 system_pods.go:61] "metrics-server-84c5f94fbc-hwphq" [b9ffea56-bc3b-4b0e-b302-9726b4125780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0918 19:39:41.581430 8317 system_pods.go:61] "nvidia-device-plugin-daemonset-cddcv" [b574c98b-2a15-4629-9c56-0509a4565cf5] Running
I0918 19:39:41.581441 8317 system_pods.go:61] "registry-66c9cd494c-m9pdd" [be6aeece-e555-4628-88de-f374e1e78aa3] Running
I0918 19:39:41.581447 8317 system_pods.go:61] "registry-proxy-rxskq" [e2a2228e-559d-447a-953c-77300e373ad5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0918 19:39:41.581453 8317 system_pods.go:61] "snapshot-controller-56fcc65765-lwgp4" [db3a36fd-16b8-42f2-9ce8-efd2efdbc731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:41.581465 8317 system_pods.go:61] "snapshot-controller-56fcc65765-vp9xg" [4fb8cfa7-1048-4342-b3ef-7f8597d3541e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:41.581471 8317 system_pods.go:61] "storage-provisioner" [4a0413f0-5a79-47ac-856d-06ca4c5730d5] Running
I0918 19:39:41.581480 8317 system_pods.go:74] duration metric: took 178.112066ms to wait for pod list to return data ...
I0918 19:39:41.581487 8317 default_sa.go:34] waiting for default service account to be created ...
I0918 19:39:41.609613 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:41.778931 8317 default_sa.go:45] found service account: "default"
I0918 19:39:41.778966 8317 default_sa.go:55] duration metric: took 197.472023ms for default service account to be created ...
I0918 19:39:41.778975 8317 system_pods.go:116] waiting for k8s-apps to be running ...
I0918 19:39:41.900716 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:41.901547 8317 kapi.go:107] duration metric: took 23.50635637s to wait for kubernetes.io/minikube-addons=registry ...
I0918 19:39:41.980877 8317 system_pods.go:86] 17 kube-system pods found
I0918 19:39:41.980912 8317 system_pods.go:89] "coredns-7c65d6cfc9-2g4l7" [c4764c8a-196f-4d05-87d9-0c7d78489b01] Running
I0918 19:39:41.980923 8317 system_pods.go:89] "csi-hostpath-attacher-0" [097868b0-2207-40a0-8638-29d43c76956f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0918 19:39:41.980931 8317 system_pods.go:89] "csi-hostpath-resizer-0" [27f2f88c-98ce-450b-9dd4-39098fa9d3c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0918 19:39:41.980940 8317 system_pods.go:89] "csi-hostpathplugin-qg252" [c24860db-28aa-4eca-aa5e-a23c98d972b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0918 19:39:41.980944 8317 system_pods.go:89] "etcd-addons-923322" [16737166-da6e-4bdb-9dc7-f99f689862bd] Running
I0918 19:39:41.980949 8317 system_pods.go:89] "kube-apiserver-addons-923322" [b9c7d74b-a9a4-442a-a79f-cb524b0620fa] Running
I0918 19:39:41.980954 8317 system_pods.go:89] "kube-controller-manager-addons-923322" [0b1c1ad7-9dec-4a2f-8169-ed1ee5b84119] Running
I0918 19:39:41.980960 8317 system_pods.go:89] "kube-ingress-dns-minikube" [22538dc0-3ac3-4849-83e9-9fc02c69f1d9] Running
I0918 19:39:41.980965 8317 system_pods.go:89] "kube-proxy-c2h5g" [ec2420ba-b77d-4ef0-849d-aad464f1ef73] Running
I0918 19:39:41.980969 8317 system_pods.go:89] "kube-scheduler-addons-923322" [74a64aa9-7aa5-4dea-b57e-a60b25beb834] Running
I0918 19:39:41.980979 8317 system_pods.go:89] "metrics-server-84c5f94fbc-hwphq" [b9ffea56-bc3b-4b0e-b302-9726b4125780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0918 19:39:41.980993 8317 system_pods.go:89] "nvidia-device-plugin-daemonset-cddcv" [b574c98b-2a15-4629-9c56-0509a4565cf5] Running
I0918 19:39:41.980998 8317 system_pods.go:89] "registry-66c9cd494c-m9pdd" [be6aeece-e555-4628-88de-f374e1e78aa3] Running
I0918 19:39:41.981002 8317 system_pods.go:89] "registry-proxy-rxskq" [e2a2228e-559d-447a-953c-77300e373ad5] Running
I0918 19:39:41.981009 8317 system_pods.go:89] "snapshot-controller-56fcc65765-lwgp4" [db3a36fd-16b8-42f2-9ce8-efd2efdbc731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:41.981019 8317 system_pods.go:89] "snapshot-controller-56fcc65765-vp9xg" [4fb8cfa7-1048-4342-b3ef-7f8597d3541e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:41.981023 8317 system_pods.go:89] "storage-provisioner" [4a0413f0-5a79-47ac-856d-06ca4c5730d5] Running
I0918 19:39:41.981031 8317 system_pods.go:126] duration metric: took 202.049608ms to wait for k8s-apps to be running ...
I0918 19:39:41.981037 8317 system_svc.go:44] waiting for kubelet service to be running ....
I0918 19:39:41.981095 8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0918 19:39:41.994510 8317 system_svc.go:56] duration metric: took 13.461902ms WaitForService to wait for kubelet
I0918 19:39:41.994536 8317 kubeadm.go:582] duration metric: took 36.888737118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0918 19:39:41.994556 8317 node_conditions.go:102] verifying NodePressure condition ...
I0918 19:39:42.111134 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:42.175904 8317 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0918 19:39:42.175954 8317 node_conditions.go:123] node cpu capacity is 2
I0918 19:39:42.175970 8317 node_conditions.go:105] duration metric: took 181.407561ms to run NodePressure ...
I0918 19:39:42.175983 8317 start.go:241] waiting for startup goroutines ...
I0918 19:39:42.400907 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:42.609627 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:42.901059 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:43.119661 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:43.401225 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:43.609945 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:43.905504 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:44.110752 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:44.403916 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:44.615652 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:44.902567 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:45.153296 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:45.401721 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:45.610788 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:45.906106 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:46.123589 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:46.401975 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:46.609947 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:46.906066 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:47.110039 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:47.401038 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:47.609505 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:47.903111 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:48.110208 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:48.402300 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:48.609667 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:48.902681 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:49.110204 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:49.408589 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:49.611159 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:49.900511 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:50.110931 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:50.401451 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:50.610771 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:50.901486 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:51.110273 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:51.400617 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:51.609409 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:51.900253 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:52.202002 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:52.400576 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:52.609524 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:52.901235 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:53.109976 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:53.401131 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:53.609369 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:53.900158 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:54.110993 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:54.400539 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:54.609992 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:54.901085 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:55.110167 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:55.404291 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:55.609804 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:55.901079 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:56.109992 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:56.400943 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:56.609215 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:56.901109 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:57.110218 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:57.401467 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:57.609087 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:57.900543 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:58.110293 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:58.485756 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:58.609971 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:58.900633 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:59.109659 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:59.401107 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:39:59.611630 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:59.902025 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:00.170357 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:00.453660 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:00.624565 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:00.902762 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:01.110056 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:01.405340 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:01.609774 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:01.902424 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:02.109292 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:02.400709 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:02.609556 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:02.902340 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:03.111318 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:03.451442 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:03.610133 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:03.901048 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:04.109237 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:04.402237 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:04.610260 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:04.902171 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:05.110779 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:05.401704 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:05.612850 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:05.915456 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:06.110204 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:06.401949 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:06.610231 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:06.901831 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:07.110697 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:07.402138 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:07.609142 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:07.900674 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:08.109759 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:08.402011 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:08.610401 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:08.901813 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:09.109429 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:09.402698 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:09.611364 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:09.902125 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:10.110428 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:10.401992 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:10.610722 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:10.901863 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:11.110147 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:11.401672 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:11.609905 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:11.902241 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:12.109957 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:12.402242 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:12.610167 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:12.901597 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:13.110136 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:13.402152 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:13.609280 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:13.900259 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:14.110017 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:40:14.402182 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:14.609694 8317 kapi.go:107] duration metric: took 55.505161147s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0918 19:40:14.900937 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:15.401211 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:15.900851 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:16.401502 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:16.900568 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:17.400614 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:17.900545 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:18.400391 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:18.901469 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:19.401582 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:19.901843 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:20.401357 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:20.902217 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:21.401325 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:21.901567 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:22.401988 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:22.901080 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:23.401348 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:23.900662 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:24.401319 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:24.901174 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:25.402354 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:25.900828 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:26.413286 8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0918 19:40:26.904812 8317 kapi.go:107] duration metric: took 1m8.508691175s to wait for app.kubernetes.io/name=ingress-nginx ...
I0918 19:40:43.099690 8317 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0918 19:40:43.099715 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:43.599198 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:44.100644 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:44.599733 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:45.101395 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:45.599106 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:46.107043 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:46.600031 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:47.099469 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:47.599834 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:48.100203 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:48.599049 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:49.099826 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:49.599132 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:50.100197 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:50.599871 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:51.100458 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:51.600262 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:52.101158 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:52.600190 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:53.099464 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:53.600181 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:54.099497 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:54.599871 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:55.106558 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:55.600370 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:56.100059 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:56.599590 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:57.099532 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:57.599907 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:58.100204 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:58.600184 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:59.100658 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:59.601171 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:00.215555 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:00.600097 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:01.103583 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:01.599505 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:02.099519 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:02.599551 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:03.099581 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:03.598998 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:04.099991 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:04.599479 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:05.099343 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:05.599482 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:06.100210 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:06.599206 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:07.099466 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:07.598861 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:08.099849 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:08.599466 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:09.099772 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:09.600589 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:10.101076 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:10.600112 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:11.100216 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:11.603590 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:12.100021 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:12.600178 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:13.099436 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:13.599971 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:14.100331 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:14.599493 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:15.100201 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:15.599890 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:16.100075 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:16.600025 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:17.099332 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:17.599214 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:18.100207 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:18.600556 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:19.099990 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:19.600780 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:20.101519 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:20.599853 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:21.099554 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:21.599057 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:22.099665 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:22.599287 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:23.100744 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:23.599294 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:24.100298 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:24.600717 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:25.100494 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:25.599759 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:26.099805 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:26.599904 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:27.100651 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:27.599493 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:28.100638 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:28.599825 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:29.100660 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:29.600240 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:30.121519 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:30.598855 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:31.100427 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:31.599925 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:32.100133 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:32.599544 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:33.099608 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:33.599912 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:34.100654 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:34.599488 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:35.099657 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:35.599343 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:36.100223 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:36.599531 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:37.099300 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:37.598886 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:38.100716 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:38.598958 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:39.100228 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:39.600424 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:40.099565 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:40.599234 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:41.100222 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:41.599537 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:42.102700 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:42.600335 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:43.099001 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:43.600178 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:44.100834 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:44.599985 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:45.167317 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:45.599053 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:46.100909 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:46.600009 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:47.099800 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:47.603403 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:48.100619 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:48.599091 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:49.100316 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:49.599852 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:50.100671 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:50.599203 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:51.101249 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:51.599970 8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:41:52.100439 8317 kapi.go:107] duration metric: took 2m31.004668501s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0918 19:41:52.103619 8317 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-923322 cluster.
I0918 19:41:52.106714 8317 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0918 19:41:52.109335 8317 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0918 19:41:52.112085 8317 out.go:177] * Enabled addons: volcano, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0918 19:41:52.114866 8317 addons.go:510] duration metric: took 2m47.008652771s for enable addons: enabled=[volcano storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0918 19:41:52.114961 8317 start.go:246] waiting for cluster config update ...
I0918 19:41:52.114998 8317 start.go:255] writing updated cluster config ...
I0918 19:41:52.115410 8317 ssh_runner.go:195] Run: rm -f paused
I0918 19:41:52.518642 8317 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0918 19:41:52.521861 8317 out.go:177] * Done! kubectl is now configured to use "addons-923322" cluster and "default" namespace by default
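The gcp-auth notes above mention the `gcp-auth-skip-secret` label and rerunning addons enable with --refresh. A minimal, hypothetical sketch of acting on both hints (the pod name and the label value "true" are assumptions for illustration, not taken from this run):
    # Create a pod labelled so the gcp-auth webhook skips mounting credentials into it.
    kubectl --context addons-923322 run no-creds-demo \
      --image=gcr.io/k8s-minikube/busybox \
      --labels="gcp-auth-skip-secret=true" \
      --restart=Never -- sleep 3600
    # Re-inject credentials into pods that already existed before the addon was enabled.
    out/minikube-linux-arm64 -p addons-923322 addons enable gcp-auth --refresh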
==> Docker <==
Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.643369896Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.646245158Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.648640924Z" level=error msg="Error running exec 11f613f847480c4ffd79f53b0abf9ba46c1d7bfb0d37641442af92526929c535 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.824612519Z" level=info msg="ignoring event" container=575c849999fe389738d2ad410b2ceed3350a6ef9a68d95d6136d8005a0a856c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.194078456Z" level=info msg="ignoring event" container=ba8d0457deb684d9132d822104f7a376fa88b3228861c713a3ded3dfe618bedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.194136968Z" level=info msg="ignoring event" container=cfd05a18613995b4270d324e3f50a7ef53adcc8a4d5e1fdd998564b6514fbb00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.360038429Z" level=info msg="ignoring event" container=fc5ed7784b0174cb45fff1b9ad47785fae4d3053c13e2e389caf827787bc1e0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.415908283Z" level=info msg="ignoring event" container=adb85c1097ec0b8d0f96db0328d74d56d3ab9646a094ed65404aab2f604d6421 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:26 addons-923322 dockerd[1288]: time="2024-09-18T19:51:26.060243390Z" level=info msg="ignoring event" container=59da5a4e8be5557cf24a0006a7e5d7ccde4463733bd446b46a4acc7e03cfb324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:26 addons-923322 dockerd[1288]: time="2024-09-18T19:51:26.209050933Z" level=info msg="ignoring event" container=839bb18337cf5c9669be45261274704ec640f36983e22a7425aa73fb3ce79bc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:31 addons-923322 dockerd[1288]: time="2024-09-18T19:51:31.738649882Z" level=info msg="ignoring event" container=345affbde6eca46c178218ee9cc0964ae9988f14488e0bf1b4268d1d34de1954 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:32 addons-923322 dockerd[1288]: time="2024-09-18T19:51:32.289973533Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:51:32 addons-923322 dockerd[1288]: time="2024-09-18T19:51:32.293032051Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:51:38 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2007afa69f16fefe6d27d45d25d8677ca8b2554704dc5c7a054b1cff499b250c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 18 19:51:39 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:39Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
Sep 18 19:51:47 addons-923322 dockerd[1288]: time="2024-09-18T19:51:47.807922131Z" level=info msg="ignoring event" container=b8a861ae470ce7b01b9ec00242e1d1cea20128e8ba47d33fc28284b7af1a47c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:48 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5df8d2f53285da6caa5a7b0279936e51b51279a30ed3641307b8b2b18ecaf55/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.058218583Z" level=info msg="ignoring event" container=2f5e92316cafdab025dfa1c5f164e8e01cca4bd2a706c10581755d47ad92b385 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.254809257Z" level=info msg="ignoring event" container=371b94d41c801840c3dd27d8e6226905087b8f9c9b99cbaff78ce754c5db6c64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.402817844Z" level=info msg="ignoring event" container=b8e3df567fef6accffb45cc730463edba7764a51d630d4eb60d02bfc88e0ab1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:49 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-rxskq_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.617325171Z" level=info msg="ignoring event" container=5427eb651ef2608d609fbff640d1e252a6aad2469944d2268e70942a35bb989b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.937992994Z" level=info msg="ignoring event" container=16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:50 addons-923322 dockerd[1288]: time="2024-09-18T19:51:50.093060536Z" level=info msg="ignoring event" container=78ffd0289fdbdf49914987bfd884db1c89ae8eb2f708212edbfce466d9e3b21c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:51:50 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:50Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7e3d43634e0bd kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 Less than a second ago Running hello-world-app 0 e5df8d2f53285 hello-world-app-55bf9c44b4-lzqv9
3e8ffad627f1d nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf 11 seconds ago Running nginx 0 2007afa69f16f nginx
e19eb0bfe3034 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 10 minutes ago Running gcp-auth 0 e40f893f157fe gcp-auth-89d5ffd79-x4mf2
79ea640b27b01 registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce 11 minutes ago Running controller 0 5fbb3c62ef5c0 ingress-nginx-controller-bc57996ff-85r62
0fa2755032e59 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited patch 0 34ecb094a8359 ingress-nginx-admission-patch-mfskz
6eb4ee03c3b4e registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 11 minutes ago Exited create 0 c056ee282284e ingress-nginx-admission-create-kggkx
1111b9d74e51a marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 11 minutes ago Running yakd 0 57770d279a1d3 yakd-dashboard-67d98fc6b-4wvqd
643a43d953b52 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 a47e0555a710f local-path-provisioner-86d989889c-94sjr
111bb68b4057b gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 cb65a88ee01f0 cloud-spanner-emulator-769b77f747-pkc8f
829714db4af63 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 99da0a499d1c5 nvidia-device-plugin-daemonset-cddcv
fa048570e9486 ba04bb24b9575 12 minutes ago Running storage-provisioner 0 c683a1ce5edda storage-provisioner
4bb2f7fc8d15f 2f6c962e7b831 12 minutes ago Running coredns 0 c698f497c9a95 coredns-7c65d6cfc9-2g4l7
208ba88a814ba 24a140c548c07 12 minutes ago Running kube-proxy 0 02cffbceacd94 kube-proxy-c2h5g
eb06e11940d5d 7f8aa378bb47d 12 minutes ago Running kube-scheduler 0 f3a48bfa4509f kube-scheduler-addons-923322
4e05a51d5d389 d3f53a98c0a9d 12 minutes ago Running kube-apiserver 0 a36bdadd9408c kube-apiserver-addons-923322
a9fec9e8cc3f5 279f381cb3736 12 minutes ago Running kube-controller-manager 0 99566d6fa2aff kube-controller-manager-addons-923322
3fae247a18699 27e3830e14027 12 minutes ago Running etcd 0 d953109c1a7d2 etcd-addons-923322
==> controller_ingress [79ea640b27b0] <==
I0918 19:40:28.947311 6 controller.go:224] "Initial sync, sleeping for 1 second"
I0918 19:40:28.947866 6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0918 19:51:37.236669 6 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
I0918 19:51:37.256979 6 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.02s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.02s testedConfigurationSize:18.1kB}
I0918 19:51:37.257013 6 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
I0918 19:51:37.268209 6 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
W0918 19:51:37.268653 6 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
I0918 19:51:37.268760 6 controller.go:193] "Configuration changes detected, backend reload required"
I0918 19:51:37.271950 6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"2827e562-8fe9-4e0d-8247-4a76a8cb788b", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2764", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0918 19:51:37.316647 6 controller.go:213] "Backend successfully reloaded"
I0918 19:51:37.316990 6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0918 19:51:40.603150 6 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
I0918 19:51:40.604653 6 controller.go:193] "Configuration changes detected, backend reload required"
I0918 19:51:40.650618 6 controller.go:213] "Backend successfully reloaded"
I0918 19:51:40.651208 6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0918 19:51:48.255417 6 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
I0918 19:51:48.280462 6 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.025s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.025s testedConfigurationSize:26.2kB}
I0918 19:51:48.280553 6 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
I0918 19:51:48.298093 6 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
W0918 19:51:48.298478 6 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
I0918 19:51:48.298561 6 controller.go:193] "Configuration changes detected, backend reload required"
I0918 19:51:48.306246 6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"59b67d79-41d2-4e2d-99cd-04f99addd7c8", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2808", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0918 19:51:48.413837 6 controller.go:213] "Backend successfully reloaded"
I0918 19:51:48.414398 6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
10.244.0.1 - - [18/Sep/2024:19:51:47 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.001 [default-nginx-80] [] 10.244.0.31:80 615 0.001 200 73850050de79f5e412cbaba4a78632d5
==> coredns [4bb2f7fc8d15] <==
[INFO] 10.244.0.7:52054 - 34547 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096909s
[INFO] 10.244.0.7:56652 - 40345 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002470414s
[INFO] 10.244.0.7:56652 - 43620 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002364513s
[INFO] 10.244.0.7:59068 - 23895 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138878s
[INFO] 10.244.0.7:59068 - 33369 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000317509s
[INFO] 10.244.0.7:45545 - 54013 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149889s
[INFO] 10.244.0.7:45545 - 16122 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000280143s
[INFO] 10.244.0.7:58762 - 63616 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090214s
[INFO] 10.244.0.7:58762 - 42116 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00016597s
[INFO] 10.244.0.7:35784 - 27011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057459s
[INFO] 10.244.0.7:35784 - 4541 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114927s
[INFO] 10.244.0.7:46635 - 6507 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001490279s
[INFO] 10.244.0.7:46635 - 54632 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001398097s
[INFO] 10.244.0.7:43296 - 50435 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086834s
[INFO] 10.244.0.7:43296 - 17920 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000042469s
[INFO] 10.244.0.25:47286 - 9272 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000365266s
[INFO] 10.244.0.25:33969 - 27542 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174465s
[INFO] 10.244.0.25:57028 - 36082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167802s
[INFO] 10.244.0.25:40374 - 50158 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000899867s
[INFO] 10.244.0.25:38367 - 41125 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000195888s
[INFO] 10.244.0.25:56918 - 14670 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156027s
[INFO] 10.244.0.25:36978 - 15491 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003010469s
[INFO] 10.244.0.25:33782 - 1131 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002287732s
[INFO] 10.244.0.25:58233 - 63159 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001857015s
[INFO] 10.244.0.25:60355 - 59516 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001095271s
==> describe nodes <==
Name: addons-923322
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=addons-923322
kubernetes.io/os=linux
minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
minikube.k8s.io/name=addons-923322
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-923322
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 18 Sep 2024 19:38:57 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-923322
AcquireTime: <unset>
RenewTime: Wed, 18 Sep 2024 19:51:46 +0000
Conditions:
Type              Status  LastHeartbeatTime                 LastTransitionTime                Reason                        Message
----              ------  -----------------                 ------------------                ------                        -------
MemoryPressure    False   Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:54 +0000   KubeletHasSufficientMemory    kubelet has sufficient memory available
DiskPressure      False   Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:54 +0000   KubeletHasNoDiskPressure      kubelet has no disk pressure
PIDPressure       False   Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:54 +0000   KubeletHasSufficientPID       kubelet has sufficient PID available
Ready             True    Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:57 +0000   KubeletReady                  kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-923322
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: 7230010990fe4ea0a589fc7937513d5d
System UUID: fdce7829-22ca-4a4d-8fcd-8b54819b5e49
Boot ID: 89948b1e-c5b8-41d2-bbb3-b80b856868d6
Kernel Version: 5.15.0-1070-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace            Name                                        CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
---------            ----                                        ------------   ----------   ---------------   -------------   ---
default              busybox                                     0 (0%)         0 (0%)       0 (0%)            0 (0%)          9m18s
default              cloud-spanner-emulator-769b77f747-pkc8f     0 (0%)         0 (0%)       0 (0%)            0 (0%)          12m
default              hello-world-app-55bf9c44b4-lzqv9            0 (0%)         0 (0%)       0 (0%)            0 (0%)          3s
default              nginx                                       0 (0%)         0 (0%)       0 (0%)            0 (0%)          14s
gcp-auth             gcp-auth-89d5ffd79-x4mf2                    0 (0%)         0 (0%)       0 (0%)            0 (0%)          11m
ingress-nginx        ingress-nginx-controller-bc57996ff-85r62    100m (5%)      0 (0%)       90Mi (1%)         0 (0%)          12m
kube-system          coredns-7c65d6cfc9-2g4l7                    100m (5%)      0 (0%)       70Mi (0%)         170Mi (2%)      12m
kube-system          etcd-addons-923322                          100m (5%)      0 (0%)       100Mi (1%)        0 (0%)          12m
kube-system          kube-apiserver-addons-923322                250m (12%)     0 (0%)       0 (0%)            0 (0%)          12m
kube-system          kube-controller-manager-addons-923322       200m (10%)     0 (0%)       0 (0%)            0 (0%)          12m
kube-system          kube-proxy-c2h5g                            0 (0%)         0 (0%)       0 (0%)            0 (0%)          12m
kube-system          kube-scheduler-addons-923322                100m (5%)      0 (0%)       0 (0%)            0 (0%)          12m
kube-system          nvidia-device-plugin-daemonset-cddcv        0 (0%)         0 (0%)       0 (0%)            0 (0%)          12m
kube-system          storage-provisioner                         0 (0%)         0 (0%)       0 (0%)            0 (0%)          12m
local-path-storage   local-path-provisioner-86d989889c-94sjr     0 (0%)         0 (0%)       0 (0%)            0 (0%)          12m
yakd-dashboard       yakd-dashboard-67d98fc6b-4wvqd              0 (0%)         0 (0%)       128Mi (1%)        256Mi (3%)      12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource             Requests     Limits
--------             --------     ------
cpu                  850m (42%)   0 (0%)
memory               388Mi (4%)   426Mi (5%)
ephemeral-storage    0 (0%)       0 (0%)
hugepages-1Gi        0 (0%)       0 (0%)
hugepages-2Mi        0 (0%)       0 (0%)
hugepages-32Mi       0 (0%)       0 (0%)
hugepages-64Ki       0 (0%)       0 (0%)
Events:
Type     Reason                    Age   From              Message
----     ------                    ----  ----              -------
Normal   Starting                  12m   kube-proxy
Normal   Starting                  12m   kubelet           Starting kubelet.
Warning  CgroupV1                  12m   kubelet           Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal   NodeAllocatableEnforced   12m   kubelet           Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory   12m   kubelet           Node addons-923322 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure     12m   kubelet           Node addons-923322 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID      12m   kubelet           Node addons-923322 status is now: NodeHasSufficientPID
Normal   RegisteredNode            12m   node-controller   Node addons-923322 event: Registered Node addons-923322 in Controller
==> dmesg <==
[Sep18 19:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015410] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.490719] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.720496] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.132493] kauditd_printk_skb: 36 callbacks suppressed
==> etcd [3fae247a1869] <==
{"level":"info","ts":"2024-09-18T19:38:54.127567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-09-18T19:38:54.127730Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-09-18T19:38:54.895295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-09-18T19:38:54.895511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-18T19:38:54.895638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-09-18T19:38:54.895763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-09-18T19:38:54.895858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-18T19:38:54.895963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-09-18T19:38:54.896074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-09-18T19:38:54.899385Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:54.907468Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-923322 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-18T19:38:54.907809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-18T19:38:54.907994Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:54.908308Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:54.908449Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:54.908034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-18T19:38:54.908057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-18T19:38:54.909080Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-18T19:38:54.909798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-18T19:38:54.910806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-18T19:38:54.931988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-18T19:38:54.933258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-09-18T19:48:55.274959Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1856}
{"level":"info","ts":"2024-09-18T19:48:55.326157Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1856,"took":"50.363715ms","hash":2043616100,"current-db-size-bytes":8851456,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4837376,"current-db-size-in-use":"4.8 MB"}
{"level":"info","ts":"2024-09-18T19:48:55.326214Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2043616100,"revision":1856,"compact-revision":-1}
==> gcp-auth [e19eb0bfe303] <==
2024/09/18 19:41:50 GCP Auth Webhook started!
2024/09/18 19:42:09 Ready to marshal response ...
2024/09/18 19:42:09 Ready to write response ...
2024/09/18 19:42:09 Ready to marshal response ...
2024/09/18 19:42:09 Ready to write response ...
2024/09/18 19:42:32 Ready to marshal response ...
2024/09/18 19:42:32 Ready to write response ...
2024/09/18 19:42:33 Ready to marshal response ...
2024/09/18 19:42:33 Ready to write response ...
2024/09/18 19:42:33 Ready to marshal response ...
2024/09/18 19:42:33 Ready to write response ...
2024/09/18 19:50:46 Ready to marshal response ...
2024/09/18 19:50:46 Ready to write response ...
2024/09/18 19:50:47 Ready to marshal response ...
2024/09/18 19:50:47 Ready to write response ...
2024/09/18 19:51:02 Ready to marshal response ...
2024/09/18 19:51:02 Ready to write response ...
2024/09/18 19:51:37 Ready to marshal response ...
2024/09/18 19:51:37 Ready to write response ...
2024/09/18 19:51:48 Ready to marshal response ...
2024/09/18 19:51:48 Ready to write response ...
==> kernel <==
19:51:51 up 34 min, 0 users, load average: 1.67, 0.86, 0.71
Linux addons-923322 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [4e05a51d5d38] <==
W0918 19:42:24.752742 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0918 19:42:24.829997 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0918 19:42:24.840343 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0918 19:42:25.315830 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0918 19:42:25.461594 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
I0918 19:50:54.985560 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0918 19:51:17.916955 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0918 19:51:17.916997 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0918 19:51:17.938163 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0918 19:51:17.938204 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0918 19:51:17.952061 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0918 19:51:17.952364 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0918 19:51:17.991611 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0918 19:51:17.991662 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0918 19:51:18.034771 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0918 19:51:18.037248 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0918 19:51:18.938573 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0918 19:51:19.038702 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W0918 19:51:19.102883 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I0918 19:51:31.654386 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0918 19:51:32.785440 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0918 19:51:37.257903 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0918 19:51:37.591305 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.3.16"}
I0918 19:51:46.966522 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0918 19:51:48.593459 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.31.192"}
==> kube-controller-manager [a9fec9e8cc3f] <==
W0918 19:51:36.342731 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:36.342782 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:36.619536 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:36.619600 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:37.576997 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:37.577057 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:39.309641 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:39.309681 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:39.619618 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:39.619666 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:39.652312 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:39.652353 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:40.658663 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:40.658838 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:51:41.606709 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:41.606760 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0918 19:51:41.827439 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
I0918 19:51:48.276450 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.104966ms"
I0918 19:51:48.339857 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="63.356091ms"
I0918 19:51:48.339933 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.91µs"
I0918 19:51:48.994412 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.128µs"
W0918 19:51:49.868713 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:51:49.868763 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0918 19:51:50.729033 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.006764ms"
I0918 19:51:50.729142 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="67.336µs"
==> kube-proxy [208ba88a814b] <==
I0918 19:39:06.453455 1 server_linux.go:66] "Using iptables proxy"
I0918 19:39:06.606842 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0918 19:39:06.606904 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0918 19:39:06.665750 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0918 19:39:06.665810 1 server_linux.go:169] "Using iptables Proxier"
I0918 19:39:06.669967 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0918 19:39:06.670272 1 server.go:483] "Version info" version="v1.31.1"
I0918 19:39:06.670285 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0918 19:39:06.696607 1 config.go:199] "Starting service config controller"
I0918 19:39:06.696645 1 shared_informer.go:313] Waiting for caches to sync for service config
I0918 19:39:06.696672 1 config.go:105] "Starting endpoint slice config controller"
I0918 19:39:06.696676 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0918 19:39:06.699658 1 config.go:328] "Starting node config controller"
I0918 19:39:06.699675 1 shared_informer.go:313] Waiting for caches to sync for node config
I0918 19:39:06.796782 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0918 19:39:06.796850 1 shared_informer.go:320] Caches are synced for service config
I0918 19:39:06.800826 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [eb06e11940d5] <==
W0918 19:38:57.900689 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0918 19:38:57.900828 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0918 19:38:57.900853 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.900899 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0918 19:38:57.900913 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.900954 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0918 19:38:57.900970 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.901013 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:57.901024 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.901074 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0918 19:38:57.901090 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.901135 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0918 19:38:57.901151 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.901217 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0918 19:38:57.901231 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.901288 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0918 19:38:57.901302 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.901361 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:57.901375 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0918 19:38:57.901446 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.900743 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0918 19:38:57.901572 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:57.900788 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:57.901663 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0918 19:38:59.088360 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.018014 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633c3e04-6499-4a0c-8b85-df14b292d711-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "633c3e04-6499-4a0c-8b85-df14b292d711" (UID: "633c3e04-6499-4a0c-8b85-df14b292d711"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.022716 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633c3e04-6499-4a0c-8b85-df14b292d711-kube-api-access-wzh47" (OuterVolumeSpecName: "kube-api-access-wzh47") pod "633c3e04-6499-4a0c-8b85-df14b292d711" (UID: "633c3e04-6499-4a0c-8b85-df14b292d711"). InnerVolumeSpecName "kube-api-access-wzh47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.120308 2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wzh47\" (UniqueName: \"kubernetes.io/projected/633c3e04-6499-4a0c-8b85-df14b292d711-kube-api-access-wzh47\") on node \"addons-923322\" DevicePath \"\""
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.120347 2362 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/633c3e04-6499-4a0c-8b85-df14b292d711-gcp-creds\") on node \"addons-923322\" DevicePath \"\""
Sep 18 19:51:48 addons-923322 kubelet[2362]: E0918 19:51:48.274805 2362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cffcd439-fa27-4718-a834-9509d4c523dd" containerName="gadget"
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.278008 2362 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffcd439-fa27-4718-a834-9509d4c523dd" containerName="gadget"
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.425389 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5ad44129-ae8e-4938-8cb2-8ed92072b2e5-gcp-creds\") pod \"hello-world-app-55bf9c44b4-lzqv9\" (UID: \"5ad44129-ae8e-4938-8cb2-8ed92072b2e5\") " pod="default/hello-world-app-55bf9c44b4-lzqv9"
Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.425491 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chx49\" (UniqueName: \"kubernetes.io/projected/5ad44129-ae8e-4938-8cb2-8ed92072b2e5-kube-api-access-chx49\") pod \"hello-world-app-55bf9c44b4-lzqv9\" (UID: \"5ad44129-ae8e-4938-8cb2-8ed92072b2e5\") " pod="default/hello-world-app-55bf9c44b4-lzqv9"
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.548781 2362 scope.go:117] "RemoveContainer" containerID="2f5e92316cafdab025dfa1c5f164e8e01cca4bd2a706c10581755d47ad92b385"
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.683546 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzcfz\" (UniqueName: \"kubernetes.io/projected/be6aeece-e555-4628-88de-f374e1e78aa3-kube-api-access-tzcfz\") pod \"be6aeece-e555-4628-88de-f374e1e78aa3\" (UID: \"be6aeece-e555-4628-88de-f374e1e78aa3\") "
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.704428 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6aeece-e555-4628-88de-f374e1e78aa3-kube-api-access-tzcfz" (OuterVolumeSpecName: "kube-api-access-tzcfz") pod "be6aeece-e555-4628-88de-f374e1e78aa3" (UID: "be6aeece-e555-4628-88de-f374e1e78aa3"). InnerVolumeSpecName "kube-api-access-tzcfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.784636 2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tzcfz\" (UniqueName: \"kubernetes.io/projected/be6aeece-e555-4628-88de-f374e1e78aa3-kube-api-access-tzcfz\") on node \"addons-923322\" DevicePath \"\""
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.887076 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxdzq\" (UniqueName: \"kubernetes.io/projected/e2a2228e-559d-447a-953c-77300e373ad5-kube-api-access-mxdzq\") pod \"e2a2228e-559d-447a-953c-77300e373ad5\" (UID: \"e2a2228e-559d-447a-953c-77300e373ad5\") "
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.896488 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a2228e-559d-447a-953c-77300e373ad5-kube-api-access-mxdzq" (OuterVolumeSpecName: "kube-api-access-mxdzq") pod "e2a2228e-559d-447a-953c-77300e373ad5" (UID: "e2a2228e-559d-447a-953c-77300e373ad5"). InnerVolumeSpecName "kube-api-access-mxdzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.988034 2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mxdzq\" (UniqueName: \"kubernetes.io/projected/e2a2228e-559d-447a-953c-77300e373ad5-kube-api-access-mxdzq\") on node \"addons-923322\" DevicePath \"\""
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.106917 2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633c3e04-6499-4a0c-8b85-df14b292d711" path="/var/lib/kubelet/pods/633c3e04-6499-4a0c-8b85-df14b292d711/volumes"
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.394803 2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qhd\" (UniqueName: \"kubernetes.io/projected/22538dc0-3ac3-4849-83e9-9fc02c69f1d9-kube-api-access-44qhd\") pod \"22538dc0-3ac3-4849-83e9-9fc02c69f1d9\" (UID: \"22538dc0-3ac3-4849-83e9-9fc02c69f1d9\") "
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.396991 2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22538dc0-3ac3-4849-83e9-9fc02c69f1d9-kube-api-access-44qhd" (OuterVolumeSpecName: "kube-api-access-44qhd") pod "22538dc0-3ac3-4849-83e9-9fc02c69f1d9" (UID: "22538dc0-3ac3-4849-83e9-9fc02c69f1d9"). InnerVolumeSpecName "kube-api-access-44qhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.496063 2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-44qhd\" (UniqueName: \"kubernetes.io/projected/22538dc0-3ac3-4849-83e9-9fc02c69f1d9-kube-api-access-44qhd\") on node \"addons-923322\" DevicePath \"\""
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.618756 2362 scope.go:117] "RemoveContainer" containerID="371b94d41c801840c3dd27d8e6226905087b8f9c9b99cbaff78ce754c5db6c64"
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.666300 2362 scope.go:117] "RemoveContainer" containerID="16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.700932 2362 scope.go:117] "RemoveContainer" containerID="16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
Sep 18 19:51:50 addons-923322 kubelet[2362]: E0918 19:51:50.702518 2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc" containerID="16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.702552 2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"} err="failed to get container status \"16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc\": rpc error: code = Unknown desc = Error response from daemon: No such container: 16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.731462 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-lzqv9" podStartSLOduration=1.777165646 podStartE2EDuration="2.731441375s" podCreationTimestamp="2024-09-18 19:51:48 +0000 UTC" firstStartedPulling="2024-09-18 19:51:49.206515985 +0000 UTC m=+769.310666271" lastFinishedPulling="2024-09-18 19:51:50.160791656 +0000 UTC m=+770.264942000" observedRunningTime="2024-09-18 19:51:50.691851755 +0000 UTC m=+770.796002042" watchObservedRunningTime="2024-09-18 19:51:50.731441375 +0000 UTC m=+770.835591662"
==> storage-provisioner [fa048570e948] <==
I0918 19:39:12.937170 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0918 19:39:12.956577 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0918 19:39:12.956625 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0918 19:39:12.971139 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0918 19:39:12.971272 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35ff7f37-9809-4c37-8770-4de917523087", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-923322_6b196dd7-83ac-448c-bc47-d2c005a5acbb became leader
I0918 19:39:12.971421 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-923322_6b196dd7-83ac-448c-bc47-d2c005a5acbb!
I0918 19:39:13.071616 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-923322_6b196dd7-83ac-448c-bc47-d2c005a5acbb!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-923322 -n addons-923322
helpers_test.go:261: (dbg) Run: kubectl --context addons-923322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-923322 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-923322 describe pod busybox:
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-923322/192.168.49.2
Start Time: Wed, 18 Sep 2024 19:42:33 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.27
IPs:
IP: 10.244.0.27
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrxnf (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-vrxnf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                      From                Message
----     ------     ----                     ----                -------
Normal   Scheduled  9m19s                    default-scheduler   Successfully assigned default/busybox to addons-923322
Warning  Failed     7m55s (x6 over 9m18s)    kubelet             Error: ImagePullBackOff
Normal   Pulling    7m44s (x4 over 9m19s)    kubelet             Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     7m44s (x4 over 9m19s)    kubelet             Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning  Failed     7m44s (x4 over 9m19s)    kubelet             Error: ErrImagePull
Normal   BackOff    4m17s (x21 over 9m18s)   kubelet             Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (75.95s)
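The busybox events and the Docker daemon log above both show pulls of gcr.io/k8s-minikube/busybox failing with "unauthorized: authentication failed", which is what left busybox in ImagePullBackOff. A minimal, hypothetical sketch for retrying the failing pull by hand inside the node (assumes the docker runtime shown above is still running and that `minikube ssh` is given the command inline):
    # Retry the same tag the kubelet was pulling; the same "unauthorized" error would
    # point at a registry-side (gcr.io) problem rather than anything cluster-specific.
    out/minikube-linux-arm64 -p addons-923322 ssh "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"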